Portal:Complex Systems Digital Campus/E-Laboratory on Ubiquitous Computing

Ubiquitous computing

Today's technology makes it possible, and even necessary, to radically change the way we gather and process information: from the monolithic approach to the networked collaboration of a huge number of possibly heterogeneous computing units. This new approach should be based on distributed processing and storage. It will allow us to add intelligence to the artefacts that are more and more present around us, and to compensate for the foreseeable limits of classical computer science (the end of the Moore era). This long-term objective requires:
• solving issues related to physical layout and communications (distributed routing and control)
• setting up self-regulating and self-managing processes
• designing new computing models
• specifying adaptive programming environments (using machine learning, retro-action and common sense).

Challenges
• Local design for global properties (routing, control, confidentiality)
• Autonomic computing (robustness, redundancy, fault tolerance)
• New computational models: distributed processing and storage; fusion of spatial, temporal and/or multi-modal data; abstraction emergence
• New programming paradigms: creation and grounding of symbols (including proof and validation)

Keywords
• Peer-to-peer networks (P2P)
• Ad hoc networks
• Observation of multiscale spatio-temporal phenomena (trophic networks, agriculture, meteorology, ...)
• Epidemic algorithms
• Computational models and information theory
• Spatial computing
• Self-aware systems
• Common sense
• Privacy

Preamble

It seems clear that we have today reached the technological limits of von Neumann's sequential computational model. New paradigms are therefore necessary to meet the ever-growing demand for computational power of our modern society. The heart of these new paradigms is the distribution of computing tasks over decentralized architectures (e.g. multi-core processors and computer grids). The complexity of such systems is the price to pay to address the scaling and robustness issues of decentralized computing. Furthermore, it is now technologically possible to flood the environment with sensors and computing units wherever they are needed. However, an efficient use of widely distributed units can only be achieved through networking, and physical constraints limit the communication range of each unit to a few of its neighbors (ad hoc networks). At another scale, the concept of P2P networks also implies a limited visibility of the whole network. In both cases (ad hoc and P2P networks), the issue is to make optimal use of the data available across the whole network. The challenges in this framework are targeted toward the new computational systems, but will also address some issues raised in social or environmental networks that are treated in other pages of this road-map.

Local design for global properties
Routing, control and privacy

In order to better design and maintain large networks, we need to understand how global behaviors can emerge even though each element only has a very limited vision of the whole system and makes decisions based on local information. A base model is that of epidemic algorithms, in which each element exchanges information with its neighbors only. The important issues are the types of information that are exchanged (which should take privacy constraints into account) and the selection of the corresponding neighbors. Both choices influence the global behavior of the system.
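As a minimal sketch of this idea (illustrative, not part of the roadmap itself), the gossip variant of an epidemic algorithm can compute a global quantity — here the network-wide average — even though each node only ever talks to its direct neighbors. The topology and parameter names below are assumptions chosen for the example.

```python
import random

def gossip_average(values, neighbors, rounds=200, seed=0):
    """Push-pull gossip averaging: each round, every node pairs with a
    random neighbor and both adopt the mean of their two values.
    Purely local exchanges drive all estimates toward the global average."""
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(rounds):
        for i in range(len(vals)):
            j = rng.choice(neighbors[i])
            mean = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = mean   # pairwise averaging conserves the sum
    return vals

# Ring topology: each node only sees its two immediate neighbors.
n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
estimates = gossip_average(list(range(n)), ring)
```

Because each pairwise exchange conserves the sum of the two values, the global average is invariant, and repeated local mixing shrinks the disagreement between nodes — the kind of emergent global property the paragraph above describes.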

Methods: Information theory; dynamical systems; statistical physics; epidemic algorithms; bio-inspired algorithms.

Autonomic Computing
Robustness, redundancy, fault tolerance

Large-scale deployment of computational systems will not be possible without making those systems autonomous, in a way that resembles properties of living systems: robustness, reliability, resilience, homeostasis. However, the size and heterogeneity of such systems make it difficult to come up with analytical models; moreover, the global behavior of the system also depends on the dynamical and adaptive behavior of the whole set of users.
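A classic building block for the robustness and redundancy mentioned above is replicated execution with majority voting. The sketch below is a toy illustration under assumed failure semantics (a failed unit simply returns nothing); names and parameters are invented for the example.

```python
import random

def run_with_redundancy(task, n_replicas, failure_prob, rng):
    """Run the same task on several unreliable units and majority-vote
    the results; failed units return None and are ignored.
    The system tolerates faults as long as at least one replica survives."""
    results = []
    for _ in range(n_replicas):
        if rng.random() < failure_prob:
            results.append(None)          # simulated crash or timeout
        else:
            results.append(task())
    valid = [r for r in results if r is not None]
    if not valid:
        raise RuntimeError("all replicas failed")
    return max(set(valid), key=valid.count)  # majority vote

rng = random.Random(1)
answer = run_with_redundancy(lambda: 2 + 2, n_replicas=5, failure_prob=0.3, rng=rng)
```

With 5 replicas at a 30% failure rate, the probability that every replica fails is 0.3^5 ≈ 0.24%, showing how cheap redundancy buys reliability from unreliable parts.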

Methods: Bio-inspired systems, self-aware systems.

New computing paradigms
Distributed processing and storage, fusion of spatial, temporal and/or multi-modal data, abstraction emergence

The networking of a large number of possibly heterogeneous computational units (grids, P2P, n-core processors) assembles a huge computational power. However, in order to use such power efficiently, new computing paradigms must be designed that take into account the distribution of information processing over weak or slow units, and the low reliability of those units and of the communication channels. Similarly, data distribution (sensor networks, RFID, P2P) raises specific challenges: integration, fusion, spatio-temporal reconstruction, validation.

Methods: Neuro-mimetic algorithms, belief propagation.

Specification of adaptive programming environments
Machine learning, retro-action and common sense

Programming ambient intelligence systems (domotics, aging, fitness) must include the user in the loop. Specifying the expected user behavior requires a transparent link between the available low-level data and the user's natural concepts (e.g. symbol grounding). On the other hand, the research agenda must start by studying actual habits; such co-evolution of the user and the system leads to hybrid complex systems.
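To make the symbol-grounding idea concrete, here is a deliberately tiny sketch of programming by demonstration: the user labels a few raw sensor readings with natural concepts, and a nearest-centroid rule grounds those symbols in the data. The concept names and readings are hypothetical.

```python
def learn_concept(demonstrations):
    """Ground user concepts in raw sensor data: from labeled
    demonstrations, compute one centroid per label, then classify
    new readings by the nearest centroid."""
    sums, counts = {}, {}
    for value, label in demonstrations:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    centroids = {lab: sums[lab] / counts[lab] for lab in sums}
    def classify(value):
        # The learned symbol is whichever centroid the reading is closest to.
        return min(centroids, key=lambda lab: abs(value - centroids[lab]))
    return classify

# The user demonstrates "warm"/"cold" on raw temperature readings.
demos = [(25.0, "warm"), (27.5, "warm"), (5.0, "cold"), (8.0, "cold")]
is_warm = learn_concept(demos)
```

Here the system never receives a definition of "warm"; the symbol acquires its meaning entirely from the user's demonstrations, which is the transparent low-level-to-concept link the section calls for.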

Methods: Brain Computer Interface, programming by demonstration, statistical learning.