A Large Scale Analysis of Unreliable Stochastic Networks
Abstract
The problem of the reliability of a large distributed system is analyzed via a new mathematical model. A typical framework is a system where a set of files is duplicated on several data servers. When one of these servers breaks down, all copies of the files stored on it are lost. They can be retrieved afterwards if copies of the same files are stored on other servers. If no other copy of a given file is present in the network, the file is permanently lost. The efficiency of such a network is therefore directly related to the performance of the mechanism used to duplicate files on servers. In this paper the duplication process is assumed to be local: each server has a capacity to make copies onto other servers, but this capacity can only be used for the copies present on that server, in contrast with previous models in the literature, for which the duplication capacity could be used globally.
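To fix ideas, the following minimal simulation sketch (in Python) illustrates the local duplication mechanism just described. All numerical parameters, the exponential clocks and the uniform placement rule are our own illustrative assumptions, not specifications taken from the paper's model.
\begin{verbatim}
import random

# Toy sketch of the model above (illustrative parameters, not the paper's):
# N servers; each file is tracked as the set of servers holding a copy.
# A server breaks down at rate MU, losing every copy it holds, and uses its
# local duplication capacity at rate LAMBDA to copy one of its OWN files
# onto another server -- the locality constraint discussed above.
random.seed(42)

N = 50          # number of servers (assumed value)
FILES = 500     # number of initial files (assumed value)
COPIES = 2      # initial copies per file (assumed value)
MU = 1.0        # breakdown rate per server (assumed value)
LAMBDA = 5.0    # duplication rate per server (assumed value)
T_MAX = 10.0    # simulation horizon

servers = list(range(N))
# copies[f] = set of servers currently holding a copy of file f
copies = [set(random.sample(servers, COPIES)) for _ in range(FILES)]

t = 0.0
while t < T_MAX:
    # Superposition of all exponential clocks: rate N * (MU + LAMBDA).
    t += random.expovariate(N * (MU + LAMBDA))
    s = random.randrange(N)
    if random.random() < MU / (MU + LAMBDA):
        # Server s breaks down: all its copies are lost; it restarts
        # empty (a common modelling assumption).
        for holders in copies:
            holders.discard(s)
    else:
        # Server s duplicates one of the files it currently holds onto a
        # uniformly chosen other server.
        local = [f for f, holders in enumerate(copies) if s in holders]
        if local:
            f = random.choice(local)
            copies[f].add(random.choice([x for x in servers if x != s]))

alive = sum(1 for holders in copies if holders)
print(f"fraction of initial files alive at t={T_MAX}: {alive / FILES:.3f}")
\end{verbatim}
The point of the sketch is the locality constraint: a server's duplication capacity is applied only to the copies it currently holds, so a server that has just restarted after a breakdown contributes nothing until it receives copies from elsewhere.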
We study the asymptotic behavior of this system in a mean-field context, i.e. when the number $N$ of servers is large. The analysis is complicated by the large dimension of the state space of the empirical distribution of the state of the network. We introduce a stochastic model of the evolution of the network which takes values in a state space whose dimension does not depend on $N$. This description does not have the Markov property, but it turns out to converge in distribution, as $N$ gets large, to a nonlinear Markov process. Additionally, this asymptotic process gives a limiting result on the rate of decay of the network, which is the key characteristic of interest for these systems. Convergence results are established, and we derive a lower bound on the exponential decay, with respect to time, of the fraction of initial files having at least one copy. Stochastic calculus with marked Poisson processes, technical estimates and mean-field results are the main ingredients of the proofs.
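In symbols, writing $\Lambda_N(t)$ for the fraction of initial files having at least one copy at time $t$ in a network of $N$ servers (the notation here is ours, not necessarily the paper's), one natural reading of the announced decay bound has the schematic form: there is a constant $\delta>0$, depending on the breakdown and duplication rates, such that, in the mean-field limit,
\[
\liminf_{N\to\infty} \Lambda_N(t) \;\geq\; e^{-\delta t}, \qquad t\geq 0,
\]
i.e. the fraction of surviving files decays at most at exponential rate $\delta$; the precise constant and the sense of convergence are those established in the body of the paper.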