Category: Meta distributions

Q cell analysis

Coverage is a key metric in wireless networks but is usually analyzed only in terms of the covered area fraction, which is a scalar in [0,1]. Here we discuss a method to analytically bound the coverage manifold 𝒞, formed by the locations y on the plane where users are covered in the sense that the probability that the signal-to-interference ratio (SIR) exceeds a threshold θ is larger than u (as argued here, this is a sensible definition of coverage):

\displaystyle\mathcal{C}\triangleq\{y\in\mathbb{R}^2\colon \mathbb{P}(\text{SIR}(y)>\theta)>u\}.

SIR(y) is the SIR measured at y for a given set of transmitters 𝒳={x1,x2,…} where the nearest transmitter provides the desired signal while all others interfere. It is given by

\displaystyle\text{SIR}(y)=\frac{h_1r_1^{-\alpha}}{\sum_{i>1}h_ir_i^{-\alpha}}=\frac{1}{\sum_{i>1}\frac{1}{H_i}\left(\frac{r_1}{r_i}\right)^{\alpha}},

where ri is the distance from y to the i-th nearest transmitter and the coefficients Hi=h1/hi are identically distributed with cdf FH. The coverage cell when x is the desired transmitter is denoted by Cx, so 𝒞 is the union of all Cx, x∈𝒳. To find an outer bound to the Cx, we define a cell consisting of those locations at which no single interferer by itself causes outage. In this cell, which we call the Q cell Qx, the ratio of the distance to the nearest interferer to the distance to the serving transmitter must exceed a suitably chosen parameter ρ, i.e.,

\displaystyle Q_x(\rho)\triangleq\left\{y\in\mathbb{R}^2\colon \frac{\|\mathcal{X}\setminus\{x\}-y\|}{\|x-y\|}>\rho\right\}.

The numerator is the distance to the nearest interferer to y. The shape of the Q cell is governed by a first key insight (which is 2200 years old):
Key Insight 1: Given two points, the locations where the ratio of the distances to the two points equals ρ form a circle (the circle of Apollonius) whose center and radius depend on the two points and ρ.
For ρ>1, which is the regime of practical interest, the Q cell is the intersection of a number of disks, one per interferer. For the more distant interferers, the radii of the disks become so large that they do not restrict the Q cell. As a result, a Q cell is either a single disk, the intersection of two disks, or a polygon-like region bounded by a small number of circular arcs. In the example in Figure 1, only the four disks with black boundaries are relevant. Their centers and radii are determined by the locations of the four strongest interferers, marked by red crosses.
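To make this construction concrete, here is a minimal Python sketch (the function names are ours) of the Apollonius disk for a serving transmitter a, an interferer b, and ratio ρ>1, together with the Q cell membership test taken directly from the definition above:

```python
import numpy as np

def apollonius_disk(a, b, rho):
    """Disk of points y with ||y-b||/||y-a|| > rho (for rho > 1).
    Squaring the boundary condition ||y-b|| = rho*||y-a|| and completing
    the square yields a circle with the center and radius below."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    center = (rho**2 * a - b) / (rho**2 - 1)
    radius = rho * np.linalg.norm(b - a) / (rho**2 - 1)
    return center, radius

def in_q_cell(y, x, interferers, rho):
    """Q cell test per the definition: the distance from y to the
    nearest interferer must exceed rho times the serving distance."""
    y = np.asarray(y, float)
    r_serv = np.linalg.norm(y - x)
    r_int = min(np.linalg.norm(y - z) for z in interferers)
    return r_int > rho * r_serv
```

Intersecting the Apollonius disks over all interferers (in practice, only the few nearest ones matter) yields Qx(ρ).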

Fig. 1. The boundary of the Q cell for ρ=2 of the transmitter at the origin is shown in red. It is formed by four black circles. The square is colored according to ℙ(SIR>1) for Rayleigh fading.

The remaining question is the connection between the distance ratio parameter ρ and the quality-of-service (QoS) parameters θ and u. It is given as

\displaystyle \rho=\sigma(\theta,u)^{1/\alpha},

where

\displaystyle \sigma(\theta,u)\triangleq\frac{\theta}{F_H^{-1}(1-u)}

is called the stringency, and FH^{-1} is the quantile function of H (the inverse of FH). For Rayleigh fading,

\displaystyle \sigma(\theta,u)=\theta\frac{u}{1-u}.

Figure 1 shows a Q cell for θ=1 and u=0.8, hence (with Rayleigh fading) ρ^α=4, which yields ρ=2 for α=2. It is apparent from the coloring that outside the Q cell, the reliability is strictly smaller than 0.8.
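In code, the mapping from the QoS pair (θ, u) and the path loss exponent α to ρ is a one-liner. A sketch for the Rayleigh case, where H is the ratio of two independent unit-mean exponentials, so FH(t)=t/(1+t) and FH^{-1}(p)=p/(1-p):

```python
def rho_rayleigh(theta, u, alpha):
    """Distance-ratio parameter rho = sigma^(1/alpha), with stringency
    sigma = theta/F_H^{-1}(1-u) = theta*u/(1-u) for Rayleigh fading."""
    sigma = theta * u / (1 - u)
    return sigma ** (1 / alpha)

print(rho_rayleigh(1, 0.8, 2))  # 2.0, the value in Fig. 1
print(rho_rayleigh(1, 0.8, 4))  # sqrt(2), as used in Fig. 3 below
```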
The second key insight is that the Q cells depend only on ρ, so there is no need to explore the three-dimensional parameter space (θ, u, α) in analysis or visualization.
Key Insight 2: The Q cells are the same for all combinations of θ, u, and α that have the same ρ.
Accordingly, Figure 2, which shows Q cells for four values of ρ, encapsulates many combinations of θ, u, and α.

Fig. 2. Q cells for ρ=1.25, 1.50, 2.00, 2.50 for 25 transmitters.

If ρ=1, the Q cells equal the Voronoi cells, and for ρ<1, they span the entire plane minus an exclusion disk for each interferer. This case is less relevant in practice since it corresponds to very lax QoS requirements (small rate and/or low reliability, resulting in a stringency less than 1).

Since Cx ⊆ Qx for each transmitter x, the union of the Q cells is an outer bound to the coverage manifold.

If 𝒳 is a stationary and ergodic point process, the area fraction of the coverage manifold corresponds to the meta distribution (MD) of the SIR, and the area fraction covered by the Q cells is an upper bound on the MD. For the Poisson point process (PPP), the Q cells cover an area fraction of 1/ρ^2, and for any stationary point process of transmitters, they cover less than 4/(1+ρ)^2. This implies that for any stationary point process of users, at least a fraction 1−4/(1+ρ)^2 of the users is uncovered. This shows how quickly users become uncovered as the stringency increases.
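The PPP result is easy to check by simulation, since a location lies in the union of the Q cells if and only if the ratio of its second-nearest to its nearest transmitter distance exceeds ρ. A minimal sketch (window size and sample counts are arbitrary choices of ours):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
rho, lam, L = 2.0, 1.0, 40.0             # distance ratio, PPP density, window size

n = rng.poisson(lam * L * L)             # Poisson number of transmitters
tx = rng.uniform(-L / 2, L / 2, (n, 2))  # PPP realization on the window

# Probe random locations in a smaller inner window to limit edge effects
probes = rng.uniform(-L / 4, L / 4, (200_000, 2))
d, _ = cKDTree(tx).query(probes, k=2)    # serving and nearest-interferer distances
covered = d[:, 1] / d[:, 0] > rho        # in the union of Q cells?

print(f"simulated: {covered.mean():.4f}   theory (PPP): {rho**-2:.4f}")
```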

Details on the calculation of the stringency for different fading models and the proofs of the properties of the Q cells can be found here. This paper also shows a method to refine the Q cells by “cutting corners”, which is possible since at a corner of a Q cell, two interferers are at equal distance but only one is taken into account. Further, if the MD is known, the refined Q cells can be shrunk so that their covered area fraction matches the MD; in this way, accurate estimates of the coverage manifold are obtained. Figure 3 shows an example of refined and scaled refined Q cells, where the transmitters form a perturbed (noisy) square lattice, in comparison with the exact coverage cells Cx.

Fig. 3. Refined Q cells (blue) and estimated coverage cells (yellow) for ρ=√2. The boundaries of the exact coverage cells are shown in red.

Due to the simple geometric construction, bounding property, and general applicability to all fading and path loss models, Q cell analysis is useful for rapid yet accurate explorations of base station placements, resource allocation schemes, and advanced transmission techniques including base station cooperation (CoMP) and non-orthogonal multiple access (NOMA).

Taming the meta distribution

The derivation of meta distributions is mostly based on the calculation of the moments of the underlying conditional distribution. The reason is that, except for highly simplistic scenarios, a direct calculation is elusive. Recall the definition of a meta distribution:

\displaystyle\bar F(t,x)=\mathbb{P}(P_t>x),\qquad\text{\sf where }\;\;P_t=\mathbb{P}(X>t\mid\Phi).

Here X is the random variable we are interested in, and Φ is part of the random elements X depends on, usually a point process modeling the locations of wireless transceivers. The random variable Pt is the conditional probability that X exceeds t, given Φ.

Using stochastic geometry, we can often derive the moments of Pt. Since Pt takes values in [0,1], finding its distribution given the moments is a Hausdorff moment problem, which has a rich history in the mathematical literature but is not fully solved. The infinite sequence of integer moments uniquely determines the distribution, but in practice we are always restricted to a finite sequence, often of fewer than 10 moments or so. This truncated version of the problem has infinitely many solutions, and the methods proposed to find or approximate the solution to the standard problem may or may not produce one of the solutions to the truncated one. In fact, they may not even produce an actual cumulative distribution function (cdf). An overview of the existing methods and some improvements can be found here.

In this blog, we focus on a method to find the infimum and supremum of the infinitely many possible cdfs that share a given finite moment sequence. The method is based on the Chebyshev-Markov inequalities. As the name suggests, it generalizes the well-known Markov and Chebyshev inequalities, which are based on moment sequences of length 1 or 2. The key step in this method is to find the maximum probability mass that can be concentrated at a point of interest, say z. This probability mass corresponds to the difference between the infimum and the supremum of the cdf at z. This technical report provides the details of the calculation.
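To see the flavor of these bounds in the simplest case, consider a single given moment m1 of a random variable supported on [0,1]. Markov's inequality, applied to X and to 1−X, yields the extremal cdfs. This sketch covers only this length-1 case, not the general machinery of the technical report:

```python
import numpy as np

def cm_bounds_one_moment(x, m1):
    """Pointwise infimum and supremum (at 0 < x < 1) of all cdfs on [0,1]
    with mean m1. Markov's inequality gives P(X > x) <= m1/x, hence
    F(x) >= 1 - m1/x; applied to 1-X it gives F(x) <= (1-m1)/(1-x)."""
    inf_cdf = np.maximum(0.0, 1.0 - m1 / x)
    sup_cdf = np.minimum(1.0, (1.0 - m1) / (1.0 - x))
    return inf_cdf, sup_cdf
```

The gap between the two curves at x equals the maximum mass that can be concentrated at x, in agreement with the key step described above.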

Fig. 1 shows an example for the moment sequence mk = 1/(k+1), k ∈ [n], where the number n of given moments increases from 5 to 15. It can be observed how the infimum (red) and supremum (blue) curves approach each other as more moments are considered. For n → ∞, both would converge to the cdf of the uniform distribution on [0,1], i.e., F(x)=x. The supremum curve yields a lower bound on the complementary cdf: for n = 15, for example, ℙ(X>1/2) > 0.4. This is the best bound that can be given, since this value is achieved by a discrete distribution.

Fig. 1. Infima (red) and suprema (blue) of all the cdfs corresponding to the truncated moment problem where 5 to 15 moments are given.

The average of the infimum and supremum at each point naturally lends itself as an approximation of the meta distribution. It can be expected to be significantly more accurate than the usual beta approximation, which is based only on the first two moments.
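For comparison, here is a sketch of that beta approximation via moment matching (the function name and interface are ours): given the first two moments m1 and m2 of Pt, the beta parameters follow from matching the mean and variance.

```python
from scipy.stats import beta as beta_dist

def beta_approx_ccdf(x, m1, m2):
    """Approximate ccdf of P_t by a Beta(a, b) distribution matched to
    mean m1 and variance m2 - m1^2 (method of moments)."""
    var = m2 - m1 ** 2
    c = m1 * (1 - m1) / var - 1      # c = a + b
    a, b = m1 * c, (1 - m1) * c
    return beta_dist.sf(x, a, b)

# Sanity check: m1 = 1/2, m2 = 1/3 gives Beta(1, 1), the uniform distribution
print(beta_approx_ccdf(0.25, 0.5, 1 / 3))  # 0.75
```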

An implementation is available on GitHub at https://github.com/ewa-haenggi/meta-distribution/blob/main/CMBounds.m.

Acknowledgment: I would like to thank my student Xinyun Wang for writing the Matlab code for the CM inequalities and providing the figure above.

Meta visualization

In this post we contrast the meta distribution of the SIR with the standard SIR distribution. The model is the standard downlink Poisson cellular network with Rayleigh fading and path loss exponent 4. The base station density is 1, and the users form a square lattice of density 5. Hence there are 5 users per cell on average.

Fig. 1: The setup. Base stations are red crosses, and users are blue circles. They are served by the nearest base station. Cell boundaries are dashed.

We assume the base stations transmit to the users in their cells at a rate such that the messages can be decoded if an SIR of θ = -3 dB is achieved. If a user succeeds in decoding, it is marked with a green square, otherwise with a red one. We let this process run over many time slots, as shown next.

Fig. 2: Transmission success (green) and failure (red) at each user over 100 time slots.

The SIR meta distribution captures the per-user statistics, obtained by averaging over the fading, i.e., over time. In the next figure, the per-user reliabilities are illustrated using a color map from pure red (0% success) to pure green (100% success).

Fig. 3: Per-user reliabilities, obtained by averaging over fading. These are captured by the SIR meta distribution. Near the top left is a user that almost never succeeds since it is equidistant to three base stations.
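The experiment behind Figs. 2 and 3 can be reproduced with a short simulation along the following lines (a sketch; the window size, seed, and lattice offset are our choices, not the exact code used for the figures):

```python
import numpy as np

rng = np.random.default_rng(0)
L, alpha, theta, slots = 10.0, 4.0, 10 ** (-3 / 10), 100   # theta = -3 dB

bs = rng.uniform(0, L, (rng.poisson(L * L), 2))        # base stations: PPP, density 1

g = np.arange(0.1, L, 1 / np.sqrt(5))                  # user lattice of density 5
users = np.array([(x, y) for x in g for y in g])

d = np.linalg.norm(users[:, None, :] - bs[None, :, :], axis=2)
serving = d.argmin(axis=1)                             # nearest-BS association

rel = np.zeros(len(users))
for _ in range(slots):                                 # average over the fading
    p = rng.exponential(size=d.shape) * d ** -alpha    # Rayleigh: exp. powers
    s = p[np.arange(len(users)), serving]              # desired signal power
    rel += (s / (p.sum(axis=1) - s) > theta)           # SIR above threshold?
rel /= slots                                           # per-user reliabilities (colors in Fig. 3)
```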

The SIR meta distribution provides the fractions of users that are above (or below) a certain reliability threshold, for instance the fraction of users that are colored at least dark green in the figure above.

In contrast, the standard SIR distribution, sometimes misleadingly called the “coverage probability”, yields just a single number for each θ, namely the average reliability, which is close to 70% in this scenario (see, e.g., Eqn. (1) in this post). Since it is obtained by averaging over fading and point process simultaneously, the underlying network structure is lost, and the only information retained is the average of the colors in Fig. 3:

Fig. 4: The standard SIR distribution only reveals the overall reliability of 70%.

The 70% reliability is that of the typical user (or typical location), which does not correspond to any actual user in our network realization. Instead, the typical user is an abstract user whose statistics are the averages of those of all users.

Acknowledgment: The help of my Ph.D. student Xinyun Wang in writing the Matlab program for this post is greatly appreciated.

The transdimensional approach

In vehicular networks, transceivers are inherently confined to a subset of the two-dimensional Euclidean space. This subset is the street system where cars are allowed to move. Accordingly, stochastic geometry models for vehicular networks usually consist of two components: A set of streets and a set of point processes, one for each street, representing the vehicles. The most popular model is the Poisson line process (PLP) for the streets, combined with one-dimensional PPPs of vehicles on each line (street).

This PLP-PPP model does not have T-junctions, only intersections. Accordingly, all vehicles are of order 2 or 4, per the taxonomy introduced here. The order is determined by the number of directions in which a vehicle can move.

The PLP-PPP is a Cox process: the one-dimensional PPPs are independent given the street system, and the street system determining the intensity measure (the line process) is itself based on the PPP. Consequently, the PLP-PPP inherits a certain level of tractability from the PPP, in the sense that exact expressions can be derived for some quantities of interest. In particular, the SIR distribution (complementary cumulative distribution function, ccdf) at the typical vehicle for a transmitter at a fixed distance can be derived without difficulty. However, the expression requires the evaluation of two nested improper integrals. While such a result certainly has its value, it does not give direct insight into how the SIR distribution depends on the network parameters. Also, other SIR-based metrics often require further integration, most importantly the SIR meta distribution, which is calculated from the higher moments of the conditional SIR ccdf (given the point process).

This raises the question whether it is possible to find a closed-form result that is much more quickly evaluated and provides a tight approximation. Simply replacing the PLP-PPP by a two-dimensional PPP produces poor results, especially in the high-reliability regime (where the SIR ccdf is near 1). Similarly, considering only the one street that the typical vehicle lies on (i.e., using only a one-dimensional PPP) ignores all the interference from the vehicles on the other streets, which strongly affects the tail of the distribution.

How about a combination of the two: a superposition of a one-dimensional PPP for the typical vehicle’s street and a two-dimensional PPP for the other vehicles? In this approach, PPPs of two different dimensions are combined into a transdimensional PPP (TPPP). It accurately characterizes the interference from nearby vehicles, which are likely to lie on the same street as the typical vehicle, and captures the remaining interference without the complexity of the PLP (see the sketch after the following list). The three key advantages of this approach are:

  • The TPPP leads to closed-form results for the SIR ccdf that are asymptotically exact, both in the lower and upper tails (near 0 and near infinity).
  • The results are highly accurate over the entire range of the SIR ccdf, and they are obtained about 100,000 times faster than the exact results. Hence, if fast evaluation is key and a time limit of, say, one μs is specified, the transdimensional approach yields more accurate results than the exact expression. Put differently, the exact expression only leads to higher accuracy if ample computation time is available.
  • The simplicity of the TPPP extends to all moments of the conditional success probability, which greatly simplifies the calculation of the SIR meta distribution.
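To illustrate the construction, here is a simulation sketch with assumed parameters (the densities λ1 and λ2, link distance, and window radius are our choices, not those of the paper): place the typical vehicle at the origin of its street, draw interferers from a 1D PPP on that street plus a 2D PPP for the rest, and average the Rayleigh-fading conditional success probability exp(−θ d^α I) over many realizations.

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2 = 1.0, 0.5         # assumed 1D (on-street) and 2D (off-street) densities
alpha, theta, d, R = 4.0, 1.0, 0.5, 50.0   # path loss, threshold, link distance, window

cp, runs = 0.0, 20_000
for _ in range(runs):
    x1 = rng.uniform(-R, R, rng.poisson(2 * R * lam1))    # 1D PPP on the street
    r2 = R * np.sqrt(rng.uniform(size=rng.poisson(lam2 * np.pi * R**2)))  # 2D PPP radii
    r = np.concatenate((np.abs(x1), r2))                  # interferer distances
    I = np.sum(rng.exponential(size=r.size) * r ** -alpha)    # interference power
    cp += np.exp(-theta * d ** alpha * I)    # P(SIR > theta | I) under Rayleigh fading

print("TPPP success probability:", cp / runs)
```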

The TPPP approach is also applicable to other street systems, including the Poisson stick model (where streets are modeled as line segments of random length) and the Poisson lilypond model, which forms T-junctions (where vehicles are of order 3). For the stick model with independent lengths, the exact expression for the nearest-neighbor distance distribution involves six nested integrals, hence a transdimensional approach is certainly warranted. More details can be found here.

Averages, distributions, and meta distributions

In this post I would like to show how meta distributions naturally emerge as an important extension of the concepts of averages and distributions. For a random variable Z, we call 𝔼(Z) its average (or mean). If we add a parameter z to compare Z against and form the family of random variables 1(Z>z), we call their mean the distribution of Z (to be precise, the complementary cumulative distribution function, ccdf for short).
Now, if Z does not depend on any other randomness, then 𝔼1(Z>z) gives the complete information about all statistics of Z, i.e., the probability of any event can be expressed by adding or subtracting these elementary probabilities.
However, if Z is a function of other sources of randomness, then 𝔼1(Z>z) does not reveal how the statistics of Z depend on those of the individual random elements. In general Z may depend on many, possibly infinitely many, random variables and random elements (e.g., point processes), such as the SIR in a wireless network. Let us focus on the case Z=f(X,Y), where X and Y are independent random variables. Then, to discern how X and Y individually affect Z, we need to add a second parameter, say x, to extend the distribution to the meta distribution:

\displaystyle \bar F_{[\![Z\mid Y]\!]}(z,x)=\mathbb{E}\mathbf{1}(\mathbb{E}[\mathbf{1}(Z>z) \mid Y]>x).

Alternatively,

\displaystyle \bar F_{[\![Z\mid Y]\!]}(z,x)=\mathbb{E}\mathbf{1}(\mathbb{E}_X\mathbf{1}(Z>z)>x).

Hence the meta distribution (MD) is defined by first conditioning on part of the randomness. It has two parameters, the distribution has one parameter, and the average has zero parameters. There is a natural progression from averages to distributions to meta distributions (and back), as illustrated in this figure:

Figure 1: Relationship between mean (average), ccdf (distribution), and MD (meta distribution).

From the top going down, we obtain more information about Z by adding indicators and parameters. Conversely, we can eliminate parameters by integration (taking averages). Letting U be the conditional ccdf given Y, i.e., U=𝔼X1(Z>z)=𝔼[1(Z>z) | Y], it is apparent that the distribution of Z is the average of U, while the MD is the distribution of U.

Let us consider the example Z=X/Y, where X is exponential with mean 1 and Y is exponential with mean 1/μ, independent of X. The ccdf of Z is

\displaystyle \bar F_{Z}(z)=\frac{\mu}{\mu+z}.

In this case, the mean 𝔼(Z) does not exist. The conditional ccdf given Y is the random variable

\displaystyle U=\bar F_{Z\mid Y}(z)=\mathbb{E}\mathbf{1}(Z>z\mid Y)=e^{-Yz},

and its distribution is the meta distribution

\displaystyle \bar F_{[\![Z\mid Y]\!]}(z,x)\!=\!\mathbb{P}(U\!>\!x)\!=\!\mathbb{P}(Y\!\leq\!-\log(x)/z)\!=\!1\!-\!x^{\mu/z}.
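A quick numerical check of this result (a sketch; the values of μ, z, and x are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, z, x = 2.0, 1.0, 0.5
U = np.exp(-z * rng.exponential(1 / mu, 1_000_000))  # conditional ccdf given Y
print((U > x).mean(), 1 - x ** (mu / z))             # both approx. 0.75
```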

As expected, the ccdf of Z is retrieved by integration over x∈[0,1]. This MD has relevance in Poisson uplink cellular networks, where base stations (BSs) form a PPP Φ of intensity λ and the users are connected to the nearest BS. If the fading is Rayleigh fading and the path loss exponent is 2, the received power from a user at an arbitrary location is S=X/Y, where X is exponential with mean 1 and Y is exponential with mean 1/(λπ), exactly as in the example above. Hence the MD of the signal power S is

\displaystyle \qquad\qquad\qquad\bar F_{[\![S\mid \Phi]\!]}(z,x)=1-x^{\lambda\pi/z}.\qquad\qquad\qquad (1)

So what additional information do we get from the MD, compared to just the ccdf of S? Let us consider a realization of Φ and a set of users forming a lattice (any stationary point process of users would work) and determine each user’s individual probability that its received power exceeds 1:

Figure 2: Realization of a Poisson cellular network of density 1 where users (red crosses x) connect to the nearest base station (blue circles o). The number next to each user u is ℙ(Su>1 | Φ).

If we draw a histogram of all the users’ probabilities (the numbers in the figure), what does it look like? This cannot be answered by merely looking at the ccdf of S. In fact, ℙ(S>1)=π/(π+1)≈0.76 is merely the average of all the numbers. To know their distribution, we need to consult the MD. From (1), the MD (for λ=1 and z=1) is 1−x^π. Hence the histogram of the numbers has the form of the probability density function πx^{π−1}. In contrast, without the MD, we have no information about the disparity between the users: their individual probabilities could all be concentrated around 0.76, or some could be near 0 and others near 1. Put differently, only the MD can reveal the performance of user percentiles, such as the “5% user” performance, which is the performance that 95% of the users achieve but 5% do not.
This interpretation of the MD as a distribution over space for a fixed realization of the point process is valid whenever the point process is ergodic.
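This spatial interpretation is easy to visualize in a sketch (our parameter choices): with λ=1, z=1, and path loss exponent 2, each user’s number in Figure 2 is exp(−R²), where R is its nearest-BS distance, and the histogram of these numbers across one realization approximates πx^{π−1}.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
L = 30.0
bs = rng.uniform(0, L, (rng.poisson(L * L), 2))   # PPP of BSs, density 1

g = np.arange(0.5, L, 1.0)                        # square lattice of users
users = np.array([(x, y) for x in g for y in g])
R, _ = cKDTree(bs).query(users)                   # nearest-BS distances

probs = np.exp(-R ** 2)                           # P(S_u > 1 | Phi) for each user
print(probs.mean())                               # approx. pi/(pi+1) = 0.76
# A histogram of `probs` approximates the density pi * x**(pi - 1)
```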

Another application of the MD is discussed in an earlier post on the fraction of reliable links in a network.