# Stochastic geometry (is) fun – part 2

# Averages, distributions, and meta distributions

A blog on stochastic geometry

February 2021

What do you tell somebody who wants to use the Palm measure but does not condition on a point at the origin?

“You are missing the point.”

In this post I would like to show how meta distributions naturally emerge as an important extension of the concepts of averages and distributions. For a random variable *Z*, we call 𝔼(*Z*) its *average* (or *mean*). If we add a parameter *z* to compare *Z* against and form the family of random variables **1**(*Z*>*z*), we call their mean the *distribution* of *Z* (to be precise, the complementary cumulative distribution function, ccdf for short).

Now, if *Z* does not depend on any other randomness, then 𝔼**1**(*Z*>*z*) gives the complete information about all statistics of *Z*, i.e., the probability of any event can be expressed by adding or subtracting these elementary probabilities.

However, if *Z* is a function of other sources of randomness, then 𝔼**1**(*Z*>*z*) does not reveal how the statistics of *Z* depend on those of the individual random elements. In general *Z* may depend on many, possibly infinitely many, random variables and random elements (e.g., point processes), such as the SIR in a wireless network. Let us focus on the case *Z*=*f*(*X*,*Y*), where *X* and *Y* are independent random variables. Then, to discern how *X* and *Y* individually affect *Z*, we need to add a second parameter, say *x*, to extend the distribution to the *meta distribution:*

F̄(*x*,*z*) = ℙ(𝔼_{X}**1**(*Z*>*z*) > *x*).

Alternatively,

F̄(*x*,*z*) = ℙ(ℙ(*Z*>*z* | *Y*) > *x*).

Hence the meta distribution (MD) is defined by first conditioning on part of the randomness. It has two parameters, the distribution has one parameter, and the average has zero parameters. There is a natural progression from averages to distributions to meta distributions (and back), as illustrated in this figure:

(Figure: the progression from average to distribution to meta distribution.)

From the top going down, we obtain more information about *Z* by adding indicators and parameters. Conversely, we can eliminate parameters by integration (taking averages). Letting *U* be the conditional ccdf given *Y*, i.e., *U*=𝔼_{X}**1**(*Z*>*z*)=𝔼[**1**(*Z*>*z*) | *Y*], it is apparent that the distribution of *Z* is the average of *U*, while the MD is the distribution of *U*.
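This progression is easy to see numerically. Below is a minimal Monte Carlo sketch with a hypothetical toy choice *f*(*x*,*y*) = *x*+*y* and standard exponential *X* and *Y* (not a model from the post, just an illustration): averaging *U* yields the distribution of *Z*, while the distribution of *U* is the MD.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy model: Z = f(X, Y) = X + Y, with X, Y independent
# standard exponentials; z and x are the two parameters of the MD.
f = lambda X, Y: X + Y
z, x = 1.5, 0.5
n_outer, n_inner = 2_000, 2_000

Y = rng.exponential(1.0, n_outer)             # outer randomness (kept fixed per row)
X = rng.exponential(1.0, (n_outer, n_inner))  # inner randomness (averaged out)
# U: conditional ccdf of Z given Y, estimated by averaging over X only
U = (f(X, Y[:, None]) > z).mean(axis=1)

print(U.mean())        # distribution of Z at z: the average of U
print((U > x).mean())  # meta distribution at (x, z): the distribution of U
```

Here the outer loop over *Y* plays the role of the "retained" randomness; each row of the inner array averages over *X* alone to estimate one realization of *U*.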

Let us consider the example *Z*=*X*/*Y*, where *X* is exponential with mean 1 and *Y* is exponential with mean 1/μ, independent of *X*. The ccdf of *Z* is

F̄_{*Z*}(*z*) = 𝔼(e^{−*zY*}) = μ/(μ+*z*).

In this case, the mean 𝔼(*Z*) does not exist. The conditional ccdf given *Y* is the random variable

*U* = 𝔼_{X}**1**(*Z*>*z*) = e^{−*zY*},

and its distribution is the meta distribution

F̄(*x*,*z*) = ℙ(*U*>*x*) = 1 − *x*^{μ/*z*},  0 ≤ *x* ≤ 1.
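A Monte Carlo sketch of this example (with the arbitrary choices μ=2 and *z*=1), comparing the empirical ccdf and MD against the closed forms μ/(μ+*z*) and 1−*x*^{μ/*z*}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo check of the X/Y example (arbitrary choices mu = 2, z = 1)
mu, z = 2.0, 1.0
n = 200_000

Y = rng.exponential(1 / mu, n)   # Y ~ exponential with mean 1/mu
U = np.exp(-z * Y)               # conditional ccdf P(Z > z | Y)

# ccdf of Z is the average of U; closed form mu / (mu + z)
print(U.mean(), mu / (mu + z))

# MD is the distribution of U; closed form 1 - x**(mu / z)
for x in (0.25, 0.5, 0.75):
    print(x, (U > x).mean(), 1 - x ** (mu / z))
```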

As expected, the ccdf of *Z* is retrieved by integration over *x*∈[0,1]. This MD has relevance in Poisson uplink cellular networks, where base stations (BSs) form a PPP Φ of intensity λ and each user is connected to its nearest BS. If the fading is Rayleigh and the path loss exponent is 2, the received power from a user at an arbitrary location is *S*=*X*/*Y*, where *X* is exponential with mean 1 and *Y* is exponential with mean 1/(λπ), exactly as in the example above. Hence the MD of the signal power *S* is

F̄_{*S*}(*x*,*z*) = 1 − *x*^{λπ/*z*},  0 ≤ *x* ≤ 1.  (1)
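The reduction of the uplink model to the *X*/*Y* example can itself be checked by simulation. The sketch below (window size and intensity are arbitrary choices) verifies that the squared distance from the origin to the nearest point of a PPP of intensity λ is exponential with mean 1/(λπ):

```python
import numpy as np

rng = np.random.default_rng(0)

# PPP of intensity lam on the window [-L/2, L/2]^2 (arbitrary choices);
# the window is large enough that boundary effects are negligible here.
lam, L = 1.0, 20.0
n_real = 10_000

d2 = np.empty(n_real)
for i in range(n_real):
    n_pts = rng.poisson(lam * L * L)              # number of BSs in the window
    pts = rng.uniform(-L / 2, L / 2, (n_pts, 2))  # uniform BS locations
    d2[i] = (pts ** 2).sum(axis=1).min()          # nearest squared distance

# Y = d2 should be Exp(lam * pi), i.e., have mean 1 / (lam * pi)
print(d2.mean(), 1 / (lam * np.pi))
```

This is just the void probability of the PPP: ℙ(no point within distance *r*) = e^{−λπ*r*²}, so *R*² is exponential with rate λπ.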

So what additional information do we get from the MD, compared to just the ccdf of *S*? Let us consider a realization of Φ and a set of users forming a lattice (any stationary point process of users would work) and determine each user's individual probability that its received power exceeds 1:

(Figure: a realization of Φ with each user's success probability shown at its lattice location.)

If we draw a histogram of all the users' probabilities (the numbers in the figure), how does it look? This cannot be answered by merely looking at the ccdf of *S*. In fact, ℙ(*S*>1)=π/(π+1)≈0.76 is merely the average of all the numbers. To know their *distribution*, we need to consult the MD. From (1), the MD (for λ=1 and *z*=1) is 1−*x*^{π}. Hence the histogram of the numbers has the form of the probability density function π*x*^{π−1}. In contrast, without the MD, we have no information about the disparity between the users. Their personal probabilities could all be tightly concentrated around 0.76, or some could have probabilities near 0 and others near 1. Put differently, only the MD can reveal the performance of user percentiles, such as the “5% user” performance, which is the performance level that 95% of the users achieve but 5% do not.
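For instance, the "5% user" performance follows directly from the MD: it is the *x* at which the MD equals 0.95. A quick numerical check (values here are for λ=1 and *z*=1, as above):

```python
import numpy as np

# "5% user" success probability for MD(x) = 1 - x**pi (lambda = 1, z = 1):
# solve 1 - x**pi = 0.95; 95% of users do better than this x.
x5 = 0.05 ** (1 / np.pi)
print(x5)  # ≈ 0.385

# Cross-check against the histogram view: user probabilities have
# cdf x**pi, so sample them by inverse transform and read off the
# empirical 5th percentile.
rng = np.random.default_rng(0)
user_probs = rng.random(100_000) ** (1 / np.pi)
print(np.percentile(user_probs, 5))  # ≈ x5
```

So while the average success probability is about 0.76, the 5% user only achieves about 0.39, a disparity that the ccdf of *S* alone cannot reveal.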

This interpretation of the MD as a distribution over space for a fixed realization of the point process is valid whenever the point process is ergodic.

Another application of the MD is discussed in an earlier post on the fraction of reliable links in a network.