
# Rayleigh fading and the PPP – part 2

The previous blog highlighted that the Rayleigh fading channel model and the Poisson deployment model are very similar in terms of their tractability and in how realistic they are. It turns out that Rayleigh fading and the PPP are the neutral cases of channel fading and node deployment, respectively, in the following sense:

• For Rayleigh fading, the power fading coefficients are exponential random variables with mean 1, which implies that the ratio of their mean to their variance is 1. If the ratio is smaller (bigger variance), the fading is stronger; if the variance goes to 0, there is less and less fading.
• For the PPP, the ratio of the mean number of points in a finite region to its variance is 1. If the ratio is larger than 1, the point process is sub-Poissonian, and if the ratio is less than 1, it is super-Poissonian.

Prominent examples of super-Poissonian point processes are clustered processes, where clusters of points are placed at the points of a stationary parent process, and Cox processes, which are PPPs with random intensity measures. Sub-Poissonian processes include hard-core processes (e.g., lattices or Matérn hard-core processes) and soft-core processes (e.g., the Ginibre point process or other determinantal point processes, or hard-core processes with perturbations).
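This classification is easy to illustrate numerically. The following minimal Python/NumPy sketch (my own illustration; the window size, the gamma mixing distribution, and the lattice jitter are arbitrary choices) estimates the mean-to-variance ratio of the point counts in a square window for one representative of each class:

```python
import numpy as np

rng = np.random.default_rng(42)
a, trials = 10.0, 5000          # observation window [0,a]^2; all three processes have intensity 1

def mean_to_var(counts):
    return counts.mean() / counts.var()

# PPP: the number of points in the window is Poisson, so the ratio is 1
ppp = rng.poisson(a * a, size=trials)

# Cox (mixed Poisson) process: random intensity with mean 1 => super-Poissonian
Lam = rng.gamma(shape=4.0, scale=0.25, size=trials)    # E(Lam) = 1
cox = rng.poisson(Lam * a * a)

# Gaussian-perturbed unit lattice (a soft-core process) => sub-Poissonian
xs = np.arange(-3.0, a + 4.0)                          # lattice sites covering the window with margin
sites = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
counts = []
for _ in range(trials):
    p = sites + rng.normal(0.0, 0.5, size=sites.shape)
    inside = (p >= 0).all(axis=1) & (p < a).all(axis=1)
    counts.append(inside.sum())
lattice = np.array(counts)

print(mean_to_var(ppp))      # ~1   (Poissonian, neutral)
print(mean_to_var(cox))      # < 1  (super-Poissonian)
print(mean_to_var(lattice))  # > 1  (sub-Poissonian)
```

With these (arbitrary) parameters, the Cox ratio falls far below 1 and the perturbed-lattice ratio far above 1, bracketing the PPP's neutral ratio of 1.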

There is no convenient family of point processes in which the entire range from lattice to extreme clustering can be covered by tuning a single parameter. In contrast, for fading, Nakagami-m fading represents such a family of models. The power fading coefficients are gamma distributed with shape parameter m and scale parameter 1/m, i.e., the probability density function is

$\displaystyle f(x)=\frac{m^m}{\Gamma(m)}x^{m-1}e^{-mx}$

with variance 1/m. The case m = 1 is the neutral case (Rayleigh fading), while 0 < m < 1 is strong (super-Rayleigh) fading and m > 1 is weak (sub-Rayleigh) fading. The following table summarizes the different classes of fading and point process models. NND stands for the nearest-neighbor distance of the typical point.
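As a quick numerical sanity check of this parametrization, here is a minimal Python/NumPy sketch (the values of m are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)

# Nakagami-m power fading: gamma with shape m and scale 1/m => mean 1, variance 1/m
for m in (0.5, 1.0, 4.0):       # super-Rayleigh, Rayleigh (neutral), sub-Rayleigh
    g = rng.gamma(shape=m, scale=1.0 / m, size=1_000_000)
    print(f"m={m}: mean={g.mean():.3f} (should be 1), var={g.var():.3f} (should be {1/m:.3f})")
```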

It is apparent that the Rayleigh-PPP model offers a good balance in the amount of randomness – not too weak and not too strong. Without specific knowledge of how large the variances of the channel coefficients and of the number of points in a region are, it is the natural default assumption. The other key reason why the combination of exponential (power) fading and the PPP is so symbiotic and popular is its tractability. It is enabled by two properties:

• with Rayleigh fading in the desired link, the SIR distribution is given by the Laplace transform of the interference;
• the Laplace transform, written as an expected product over the point process, has the form of a probability generating functional, which has a closed-form expression for the PPP.

The fading in the interfering channels can be arbitrary; what is essential for tractability is only the fading in the desired link, as the following standard calculation makes explicit.
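As an illustration (the notation here is mine, not from the text above), take a desired link of distance $r$ with unit-mean exponential fading power $h$ and path loss $r^{-\alpha}$, and interference $I=\sum_{x\in\Phi}h_x\ell(x)$ from the PPP $\Phi$ of intensity $\lambda$ with iid fading coefficients $h_x$ and path loss law $\ell(x)=\|x\|^{-\alpha}$. Then

$\displaystyle \mathbb{P}(\mathrm{SIR}>\theta)=\mathbb{P}\big(h>\theta r^{\alpha}I\big)=\mathbb{E}\big(e^{-\theta r^{\alpha}I}\big)=\mathcal{L}_I(\theta r^{\alpha}),$

$\displaystyle \mathcal{L}_I(s)=\mathbb{E}\prod_{x\in\Phi}\mathbb{E}_{h_x}\big(e^{-sh_x\ell(x)}\big)=\exp\left(-\lambda\int_{\mathbb{R}^2}\Big(1-\mathbb{E}_h\big(e^{-sh\ell(x)}\big)\Big)\,\mathrm{d}x\right).$

The first chain of equalities uses only the exponential distribution of $h$; the second step is the probability generating functional of the PPP, and it holds for any fading distribution in the interfering channels – exactly the two properties listed above.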

# Rayleigh fading and the PPP

When stochastic geometry applications in wireless networking were still in their infancy or youth, I was frequently asked “Do you believe in the PPP model?”. I usually answered with a counter-question: “Do you believe in the Rayleigh fading model?”. This “answer” was motivated by the high likelihood that the person asking

• was familiar with the idea of modeling the effects of multi-path propagation using Rayleigh fading;
• found it not only acceptable but quite natural to use a model with obvious shortcomings and limitations, for the sake of analytical tractability and design insight.

It usually turned out that the person quickly realized that the apparent shortcomings of the PPP model are quite comparable to those of the Rayleigh fading model, and that, on the flip side, both models share a high level of tractability.

Surely if one can accept that wireless signals propagate along infinitely many paths of comparable propagation loss with independent phases, resulting in a random received power with infinite support, one can accept a point process model with infinitely many points that are, loosely speaking, independently placed. If one can accept that at 0 dBm transmit power, there is a positive probability that the power received over a 1 km distance exceeds 90 dBm (1 MW), then surely one can accept that there is a positive probability that two points are separated by only 1 cm.

So why is it that Rayleigh fading was (and perhaps still is) more acceptable than the PPP? Is it just that Rayleigh fading has been used for wireless channel modeling for much longer than the PPP? Perhaps. But maybe part of the answer lies in what prompts us to use stochastic models in the first place.

Fundamentally, there is no randomness in wireless propagation. If we know the characteristics of the antennas and the locations and properties of all objects, we can calculate the channel parameters exactly (say, by ray tracing) – and if there is no mobility, the channel stays fixed forever. So why introduce randomness where there is none? There are two reasons:

• Ray tracing is computationally expensive.
• The results obtained only apply to one very specific scenario. If a piece of furniture is moved a bit, we need to start from scratch.

Often the goal is to design a communication architecture, but such design cannot be based on the layout of a specific room. So we need a model that captures the characteristics of the channels in many rooms in many buildings, but obtaining such a large data set would be very expensive, and it would be hard to derive any useful insight from it. In contrast, a random model offers simplicity and superior tractability.

Similarly, in a network of transceivers, we could in principle assume that all their locations (and mobility vectors) are known, plus their transmit powers. Then, together with the (deterministic) channels, the interference power would be a deterministic quantity. This is very impractical and, as above, we do not want to decide on the standards for 7G cellular networks based on a given set of base station and user (and pet and vacuum robot and toaster and cactus) locations. Instead we aim for the robust design that a random spatial model (i.e., a point process) offers.

Another aspect here is that the channel fading process is often perceived (and modeled) as a random process in time. Although any temporal change in the channel is but a consequence of a spatial change, it is convenient to disregard the purely spatial nature of fading and assume it to be temporal. Then we can apply the standard machinery for temporal random processes in the performance analysis of a link. This includes, in particular, ergodicity, which conveniently allows us to argue that over some time period the performance will be close to that predicted by the ensemble average. The temporal form of ergodicity appears to be much more ingrained in our thinking than its spatial counterpart, which is at least as powerful: in an ergodic point process, the average performance of all links in each realization corresponds to that of the typical link (in the sense of the ensemble average). In the earlier days of stochastic geometry applications to wireless networks this key equivalence was not well understood – in particular by reviewers. Frequently they pointed out that the PPP model (or any point process model for that matter) is only relevant for networks with very high mobility, believing that only high mobility would justify the ensemble averaging. Luckily this is much less of an issue nowadays.

So far we have discussed Rayleigh fading and the PPP separately. The true strength of these simple models becomes apparent when they are combined into a wireless network model. In fact, most of the elegant closed-form stochastic geometry results for wireless networks are based on (or restricted to) this combination. There are several reasons for this symbiotic relationship between the two models, which we will explore in a later post.

# The typical user and her malfunctioning base station

Let us consider a cellular network with Poisson distributed base stations (BSs). We assume that in each Voronoi cell, one user is located uniformly at random (and independently across cells), and, naturally, the user is connected to the nucleus of that cell. This is the user point process of type I defined in this article. In this case, the typical user, named Alice, does reside in the typical cell since there is no size-biased sampling involved in defining the typical user. The downlink SIR performance of this network has been analyzed here (SIR distribution) and here (SIR meta distribution).

Suddenly and unfortunately, Alice’s serving BS is malfunctioning. Her downlink is, well, down, and she gets reconnected to the next-nearest one. How does that affect her SIR performance?

In another network, also with Poisson BSs, lives another type of typical user, namely the typical user of a stationary point process of users that is independent of the base station process. This typical user’s name is Bob. Bob’s SIR performance is the same as the one measured at an arbitrary deterministic location on the plane, as discussed in this post. He resides in the 0-cell, not the typical cell. So in his case, the typical user does not reside in the typical cell.

Noticing that Alice’s original BS has ceased to operate, Bob says: “I think now your SIR performance is the same as mine. After all, your cell was formed by adding a BS at the origin, while my network has no such BS. With that added BS removed, we are in the same situation.”

Alice responds: “You may have a point, but I am not sure that my location is uniform in the 0-cell, as yours is.”

Bob: “Good point, but wouldn’t that be the natural conjecture?”

Alice: “I am not sure. How about we verify? Let’s look at the distances.”

Bob: “Ok. For me, taking the BS density to be 1, if $B_n$ is the distance to my n-th nearest BS, we have

$\displaystyle \pi\mathbb{E}(B_n^2)=n.$”

Alice: “For me, the distance $A_1$ to the malfunctioning BS satisfies $\pi\mathbb{E}(A_1^2)\approx 10/13$, by the properties of the typical cell. If you are correct, then the distribution of $A_2$ should be the same as that of $B_1$. But that’s not the case, see this figure.”

Bob: “I see – your new serving BS at distance $A_2$ is quite a bit further away than mine at $B_1$. So my conjecture was wrong.”

Alice: “Yes, but the question of how resilient a cellular network is to BS outages is an interesting one. How about we compare the SIR performance with and without BS outage in different networks, say Poisson networks and lattice networks? I bet Poisson networks are more robust, in the sense that the downlink SIR statistics change less when there is an outage and users need to be handed off to the next-nearest BS.”

Bob: “Hmm… that would make sense. But would that mean we should build clustered networks, to achieve even higher robustness?”

Alice: “Possibly – if all we worry about is a small loss when a user is offloaded. But we should take into account the absolute performance also, and clustered networks are worse in this regard. If the starting performance is much higher, it is acceptable to have a somewhat bigger loss due to outage and handover.”

Bob: “Makes sense. Sorry, I have to go. My SIR is so high that I just got a phone call.”
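Alice and Bob’s distance comparison is easy to reproduce numerically. Below is a minimal simulation sketch (Python/NumPy, assuming unit BS density; the window radius, the candidate box of the rejection sampler, and the trial count are arbitrary choices, and the box truncates the typical cell only with negligible probability):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, R, trials = 1.0, 6.0, 5000      # unit BS density; disk window of radius R

A1, A2, B1 = [], [], []
for _ in range(trials):
    # PPP of base stations in a disk of radius R around the origin
    n = rng.poisson(lam * np.pi * R**2)
    rad = R * np.sqrt(rng.random(n))
    ang = 2 * np.pi * rng.random(n)
    pts = np.column_stack((rad * np.cos(ang), rad * np.sin(ang)))

    # Bob: user at the origin, independent of the PPP; B1 is his serving distance
    B1.append(np.hypot(pts[:, 0], pts[:, 1]).min())

    # Alice: uniform location in the typical cell (add a BS at the origin, per Slivnyak);
    # rejection sampling: accept u if the origin BS is its nearest BS
    while True:
        u = rng.uniform(-2.0, 2.0, size=2)
        du = np.hypot(pts[:, 0] - u[0], pts[:, 1] - u[1])
        if np.hypot(u[0], u[1]) < du.min():
            A1.append(np.hypot(u[0], u[1]))  # distance to the (now malfunctioning) BS
            A2.append(du.min())              # distance to the next-nearest BS
            break

print("pi*E(A1^2) =", np.pi * np.mean(np.square(A1)), "(approx. 10/13 = 0.769)")
print("E(A2) =", np.mean(A2), " vs  E(B1) =", np.mean(B1), "(= 1/2 for lam = 1)")
```

The estimated E(A2) comes out noticeably larger than E(B1), which is exactly the discrepancy that refutes Bob’s conjecture.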

# The curious shape of Poisson-Voronoi cells

In this blog we are exploring the shape of two kinds of cells in the Poisson-Voronoi tessellation on the plane, namely the 0-cell and the typical cell. The 0-cell is the cell containing the origin, while the typical cell is the cell obtained by conditioning on a Poisson point to be at the origin (which is the same as adding the origin to the PPP).

The cell shape has an important effect on the signal and interference powers at the typical user (in the 0-cell) and at the user in the typical cell. For instance, in the 0-cell, which contains the typical user at a uniformly random location, about 1/4 of the cell edge is, on average, at essentially the same distance to the base station as the typical user. Hence it is not the case that edge users necessarily suffer from larger signal attenuation than the typical user (who resides inside the cell).

The cell shape is determined by the directional radii of the cells when their nucleus is at the origin. To have a well-defined orientation, we select a location uniformly in the cell and rotate the cell so that this location falls on the positive x-axis. In the 0-cell, this involves first a translation of the cell’s nucleus to the origin, followed by a rotation until the original origin (which is uniformly distributed in the cell) lies on the positive x-axis. This is illustrated in Movie 1 below. In the typical cell, it involves adding a Poisson point, selecting a uniform location, and a rotation so that this uniform location lies on the positive x-axis. This is illustrated in Movie 2.

As indicated in the movies, the distances from the nucleus to the uniformly random location are denoted by D0 and D, respectively, and the directional radii by R0(ϕ) and R(ϕ), respectively. This way, the cell boundaries are described in polar coordinates as (R0(ϕ),ϕ) and (R(ϕ),ϕ), ϕ ∈ [0,2π). In a cellular network model, the uniformly random location could be that of a user, while the PPP models the base stations. In this case D0 is the link distance from the typical user to its serving base station, while D is the link distance from the typical base station to a randomly located user it serves. The distinction between the typical user’s and the typical base station’s point of view is explained in this blog.

Let λ denote the density of the PPP. Three results are well known:

• The distribution of D0 follows from the void probability of the PPP. It is Rayleigh with mean 1/(2√λ).
• Since the mean area of the typical cell is 1/λ, we have $\int_0^{\pi}\mathbb{E}(R(\phi)^2)\,\mathrm{d}\phi=1/\lambda$.
• The minimum of R(ϕ) is distributed with pdf $f(r)=8\lambda\pi r\exp(-4\lambda\pi r^2)$. This is half the distance to the nearest neighboring Poisson point (base station).

In contrast, there is no closed-form expression for the distribution of D. Due to size-biased sampling, the area of the 0-cell stochastically dominates that of the typical cell and, in turn, D0 dominates D.
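The first and third of these results are easy to verify by simulation. Here is a minimal sketch (arbitrary window and trial parameters; it uses the fact that, by Slivnyak’s theorem, the NND of the typical point has the same distribution as the distance from the origin to the nearest point of the PPP):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, R, trials = 1.0, 5.0, 100_000

d = np.empty(trials)
for i in range(trials):
    # only the radial coordinates matter for nearest-point distances from the origin
    n = rng.poisson(lam * np.pi * R**2)
    d[i] = (R * np.sqrt(rng.random(n))).min() if n else R   # n = 0 is astronomically unlikely here

print("E(D0) =", d.mean(), "(theory: 1/(2 sqrt(lam)) =", 1 / (2 * np.sqrt(lam)), ")")
print("E(min R(phi)) =", (d / 2).mean(), "(theory: 1/(4 sqrt(lam)) =", 1 / (4 * np.sqrt(lam)), ")")
```

The second line checks the mean of the pdf $f(r)=8\lambda\pi r\exp(-4\lambda\pi r^2)$ above, which equals 1/(4√λ).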

Analyzing the directional radii, we obtain these new insights on the cell shapes:

• If Ψ is uniform in [0,π], R(Ψ) is again Rayleigh with mean 1/(2√λ).
• R0(π) is also Rayleigh with the same mean. In fact, R0(π) and D0 are iid.
• R0(0) has mean 3/(4√λ) and is distributed as

$\displaystyle f_{R_0(0)}(y)=2(\lambda\pi)^2 y^3 \exp(-\lambda\pi y^2).$

• Hence R0(0) is on average exactly 50% larger than R0(π). For the typical cell, simulation results indicate that R(0) is about 55% larger on average than R(π).
• The difference R0(0)-D0 is distributed as $f(r)=\pi\sqrt{\lambda}\,\mathrm{erfc}(r\sqrt{\pi\lambda})$. Its mean is 1/(4√λ). Hence, on average, the typical user is closer to the cell edge (in the direction away from the base station) than to its serving base station.
• The joint distribution of D0 and R0(ϕ) can be given in exact analytical form.
• 3/4 of the typical cell is further away from the nucleus than the nearest point on the cell edge (i.e., the minimum directional radius). Expressed differently, a uniformly random user in the typical cell has a 75% chance of being further away from the base station than the nearest edge user. By simulation, D on average is 2.7 times larger than the minimum of the directional radii.
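The directional-radius statements can also be checked numerically. The sketch below (my own construction, with arbitrary parameters and ignoring edge effects) simulates the 0-cell: the nucleus is the PPP point closest to the origin, the user sits at the origin, and the directional radius follows from the perpendicular bisectors that bound the Voronoi cell:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, R, trials = 1.0, 8.0, 20_000

def dir_radius(nuc, pts, u):
    # distance from the nucleus to its Voronoi cell boundary in unit direction u:
    # along the ray nuc + t*u, the bisector of a point y is reached at
    # t = |y - nuc|^2 / (2 (y - nuc).u), relevant only if (y - nuc).u > 0
    d = pts - nuc
    proj = d @ u
    pos = proj > 0
    return np.min((d[pos] ** 2).sum(axis=1) / (2 * proj[pos]))

r0, rpi = [], []
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)
    rad = R * np.sqrt(rng.random(n))
    ang = 2 * np.pi * rng.random(n)
    pts = np.column_stack((rad * np.cos(ang), rad * np.sin(ang)))

    i = np.argmin(np.hypot(pts[:, 0], pts[:, 1]))     # nucleus of the 0-cell
    z = pts[i]
    others = np.delete(pts, i, axis=0)
    u = -z / np.hypot(*z)        # direction from the nucleus towards the user (origin)
    r0.append(dir_radius(z, others, u))       # R0(0)
    rpi.append(dir_radius(z, others, -u))     # R0(pi)

print("E(R0(0))  =", np.mean(r0), "(theory: 3/(4 sqrt(lam)) = 0.75)")
print("E(R0(pi)) =", np.mean(rpi), "(theory: 1/(2 sqrt(lam)) = 0.5)")
```

The two estimates land close to 0.75 and 0.5, illustrating the exact 50% asymmetry stated above.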

In conclusion, the 0-cell and the typical cell are quite asymmetric relative to the nucleus (base station) and the uniformly random point (user). In the direction away from the base station, the user is about 4 times closer to the cell edge than in the direction towards the base station, and many locations on the cell edge are closer to the base station than the user inside the cell is. These results have implications for the design of efficient cellular network transmission schemes, such as beamforming, NOMA, and base station cooperation, in both down- and uplink.

More details are available in Section II of this paper.