
Q cell analysis

Coverage is a key metric in wireless networks, but it is usually analyzed only in terms of the covered area fraction, which is a scalar in [0,1]. Here we discuss a method to analytically bound the coverage manifold 𝒞, formed by the locations y on the plane where users are covered in the sense that the probability that the signal-to-interference ratio (SIR) exceeds a threshold θ is greater than u (as argued here, this is a sensible definition of coverage):

\displaystyle\mathcal{C}\triangleq\{y\in\mathbb{R}^2\colon \mathbb{P}(\text{SIR}(y)>\theta)>u\}.

SIR(y) is the SIR measured at y for a given set of transmitters 𝒳={x1,x2,…} where the nearest transmitter provides the desired signal while all others interfere. It is given by

\displaystyle\text{SIR}(y)=\frac{h_1r_1^{-\alpha}}{\sum_{i>1}h_ir_i^{-\alpha}}=\frac{1}{\sum_{i>1}\frac{1}{H_i}\left(\frac{r_1}{r_i}\right)^{\alpha}},

where ri is the distance from y to the i-th nearest transmitter and the coefficients Hi=h1/hi are identically distributed with cdf FH. The coverage cell when x is the desired transmitter is denoted by Cx, so 𝒞 is the union of all Cx, x∈𝒳. To find an outer bound to the Cx, we define a cell consisting of the locations at which no single interferer by itself causes them to be uncovered. In that cell, which we call the Q cell Qx, the ratio of the distance to the nearest interferer to the distance to the serving transmitter has to be larger than a suitably chosen parameter ρ, i.e.,

\displaystyle Q_x(\rho)\triangleq\left\{y\in\mathbb{R}^2\colon \frac{\|\mathcal{X}\setminus\{x\}-y\|}{\|x-y\|}>\rho\right\}.

The numerator is the distance from y to the nearest interferer. The shape of the Q cell is governed by a first key insight (which is 2200 years old):
Key Insight 1: Given two points, the locations where the ratio of the distances to the two points equals ρ form a circle whose center and radius depend on the two points and ρ.
For ρ>1, which is the regime of practical interest, the Q cell is the intersection of a number of disks, one for each interferer. For the more distant interferers, the radii of the disks become so large that they do not restrict the Q cell. As a result, a Q cell consists of either just one disk, the intersection of two disks, or a polygon plus a small number of disk segments. In the example in Figure 1, only the four disks with black boundaries are relevant. Their centers and radii are determined by the locations of the four strongest interferers, marked by red crosses.
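To make this construction concrete, here is a minimal sketch (ours, not from the paper) of the disk induced by a single interferer and of a Q-cell membership test based on it; the function names and example geometry are ours. For the ratio ρ, the Apollonius circle with respect to the serving transmitter x and an interferer z has center (ρ²x−z)/(ρ²−1) and radius ρ‖x−z‖/(ρ²−1), and the set {‖y−z‖/‖y−x‖>ρ} is the interior of that disk.

```python
import numpy as np

def apollonius_disk(x, z, rho):
    """Disk of locations y with ||y - z|| / ||y - x|| > rho (for rho > 1):
    center (rho^2 x - z)/(rho^2 - 1), radius rho*||x - z||/(rho^2 - 1)."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    c = (rho**2 * x - z) / (rho**2 - 1.0)
    r = rho * np.linalg.norm(x - z) / (rho**2 - 1.0)
    return c, r

def in_q_cell(y, x, interferers, rho):
    """y lies in Q_x(rho) iff it is inside every interferer's Apollonius disk."""
    y = np.asarray(y, float)
    for z in interferers:
        c, r = apollonius_disk(x, z, rho)
        if np.linalg.norm(y - c) >= r:
            return False
    return True

# Example: serving transmitter at the origin, three interferers, rho = 2
x0 = np.array([0.0, 0.0])
others = np.array([[2.0, 0.0], [0.0, 3.0], [-2.5, -1.0]])
print(in_q_cell([0.2, 0.1], x0, others, rho=2.0))   # near the serving TX -> True
print(in_q_cell([1.5, 0.0], x0, others, rho=2.0))   # close to an interferer -> False
```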

Fig. 1. The boundary of the Q cell for ρ=2 of the transmitter at the origin is shown in red. It is formed by four black circles. The square is colored according to ℙ(SIR>1) for Rayleigh fading.
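The coloring in Fig. 1 is the per-location reliability ℙ(SIR(y)>θ). As a minimal numerical sketch (ours), assuming Rayleigh fading with unit-mean power gains and the power-law path loss from the SIR expression above, it can be estimated at any location by Monte Carlo over the fading:

```python
import numpy as np

def reliability(y, tx, theta=1.0, alpha=4.0, n_fading=20000, rng=None):
    """Monte Carlo estimate of P(SIR(y) > theta) over Rayleigh fading:
    the nearest transmitter serves, all others interfere."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.sort(np.linalg.norm(np.asarray(tx, float) - np.asarray(y, float), axis=1))
    h = rng.exponential(1.0, size=(n_fading, len(r)))     # i.i.d. unit-mean power fading
    sir = h[:, 0] * r[0] ** (-alpha) / (h[:, 1:] * r[1:] ** (-alpha)).sum(axis=1)
    return (sir > theta).mean()

# Example: 25 random transmitters; evaluate the reliability at two locations
rng = np.random.default_rng(0)
tx = rng.uniform(-5, 5, size=(25, 2))
print(reliability(tx[0] + [0.05, 0.0], tx, rng=rng))   # very close to a transmitter
print(reliability([0.0, 0.0], tx, rng=rng))            # at the origin
```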

The remaining question is the connection between the distance ratio parameter ρ and the quality-of-service (QoS) parameters θ and u. It is given as

\displaystyle \rho=\sigma(\theta,u)^{1/\alpha},

where

\displaystyle \sigma(\theta,u)\triangleq\frac{\theta}{F_H^{-1}(1-u)}

is called the stringency and FH⁻¹ is the quantile function of H (the inverse of FH). For Rayleigh fading,

\displaystyle \sigma(\theta,u)=\theta\frac{u}{1-u}.

Figure 1 shows a Q cell for θ=1 and u=0.8, hence (with Rayleigh fading) ρ^α=4, i.e., ρ=2 for α=2. It is apparent from the coloring that outside the Q cell, the reliability is strictly smaller than 0.8.
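For reference, here is a small sketch (ours) of the mapping from the QoS pair (θ, u) to ρ under Rayleigh fading, reproducing the numbers above; the function names are ours.

```python
import numpy as np

def stringency_rayleigh(theta, u):
    """Stringency sigma(theta, u) = theta * u / (1 - u) for Rayleigh fading,
    since F_H(t) = t/(1+t) and hence F_H^{-1}(1-u) = (1-u)/u."""
    return theta * u / (1.0 - u)

def rho_from_qos(theta, u, alpha):
    """Distance-ratio parameter rho = sigma^(1/alpha)."""
    return stringency_rayleigh(theta, u) ** (1.0 / alpha)

# The values discussed in the text: theta = 1, u = 0.8 give sigma = 4
print(stringency_rayleigh(1.0, 0.8))        # 4.0
print(rho_from_qos(1.0, 0.8, alpha=2.0))    # 2.0
print(rho_from_qos(1.0, 0.8, alpha=4.0))    # sqrt(2) ~ 1.414
```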
The second key insight is that the Q cells depend only on ρ, so there is no need to explore the three-dimensional parameter space (θ, u, α) in analysis or visualization.
Key Insight 2: The Q cells are the same for all combinations of θ, u, and α that have the same ρ.
Accordingly, Figure 2, which shows Q cells for four values of ρ, encapsulates many combinations of θ, u, and α.

Fig. 2. Q cells for ρ=1.25, 1.50, 2.00, 2.50 for 25 transmitters.

If ρ=1, the Q cells equal the Voronoi cells, and for ρ<1, they span the entire plane minus an exclusion disk around each interferer. This case is less relevant in practice since it corresponds to very lax QoS requirements (a small rate and/or low reliability, resulting in a stringency less than 1).

Since Cx ⊆ Qx for each transmitter x, the union of the Q cells is an outer bound to the coverage manifold.

If 𝒳 is a stationary and ergodic point process, the area fraction of the coverage manifold corresponds to the meta distribution (MD) of the SIR, and the area fraction covered by the Q cells is an upper bound to the MD. For the Poisson point process (PPP), the Q cells cover an area fraction of ρ⁻², and for any stationary point process of transmitters, they cover less than 4/(1+ρ)². This implies that for any stationary point process of users, at least a fraction 1-4/(1+ρ)² of the users is uncovered. This shows how quickly users become uncovered as the stringency increases.
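The ρ⁻² result for the PPP is easy to verify numerically. The sketch below (ours, with arbitrary parameters) estimates the area fraction covered by the Q cells in one PPP realization by testing, at random probe locations, whether the second-nearest transmitter is at least ρ times as far away as the nearest one; scipy's KD-tree is used only for the nearest-neighbor queries.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
lam, L, rho = 1.0, 40.0, 2.0                     # density, window side, distance-ratio parameter

# One PPP realization of transmitters in an L x L window
n = rng.poisson(lam * L * L)
tx = rng.uniform(0.0, L, size=(n, 2))

# Probe locations in the interior of the window (to limit edge effects)
probes = rng.uniform(0.25 * L, 0.75 * L, size=(100000, 2))

# Nearest (serving) and second-nearest (strongest interferer) distances
d, _ = cKDTree(tx).query(probes, k=2)
covered = d[:, 1] / d[:, 0] > rho                # probe lies in the Q cell of its serving TX

print("simulated area fraction:", covered.mean())        # ~ rho**-2 = 0.25
print("PPP result rho**-2     :", rho**-2)
print("general upper bound    :", 4.0 / (1.0 + rho)**2)  # 4/(1+rho)^2 ~ 0.444
```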

Details on the calculation of the stringency for different fading models and the proofs of the properties of the Q cells can be found here. This paper also shows a method to refine the Q cells by “cutting corners”, which is possible since at a corner of a Q cell, two interferers are at equal distance but only one of them is taken into account. Further, if the MD is known, the refined Q cells can be shrunk so that their covered area fraction matches the MD; this way, accurate estimates of the coverage manifold are obtained. Figure 3 shows an example of refined and scaled refined Q cells where the transmitters form a perturbed (noisy) square lattice, in comparison with the exact coverage cells Cx.

Fig. 3. Refined Q cells (blue) and estimated coverage cells (yellow) for ρ=√2. The boundaries of the exact coverage cells are shown in red.

Due to the simple geometric construction, bounding property, and general applicability to all fading and path loss models, Q cell analysis is useful for rapid yet accurate explorations of base station placements, resource allocation schemes, and advanced transmission techniques including base station cooperation (CoMP) and non-orthogonal multiple access (NOMA).

Is the IRS interfering?

There is considerable interest in wireless networks equipped with intelligent reflecting surfaces (IRSs), aka reconfigurable intelligent surfaces or smart reflecting surfaces. While everyone agrees that IRSs can enhance the signal over a desired link, there are conflicting views on whether IRSs matched to a certain receiver cause interference at other receivers. The purpose of this post is to clarify this point.

To this end, we focus on a downlink cellular setting where BSs form a stationary point process Φ and a user and a passive IRS with N elements are placed in each (Voronoi) cell. The exact distribution is irrelevant for our purposes. The desired signal at each user consists of the direct signal from its BS plus the intelligently reflected signal at its IRS. The SIR of a user at the origin served by the BS at x0 can be expressed as

\displaystyle \text{SIR}=\frac{G \ell(x_0)}{I},

where ℓ(x0) is the path loss from the serving BS and

\displaystyle G=\Big|g_0+\sqrt{\Delta}\sum_{i=1}^N g_{i,1}g_{i,2}\Big|^2.

Here the random variables g0 and gi,1, gi,2 capture the (amplitude) fading over the direct link and over the two hops of the reflected (indirect) link via the i-th IRS element, respectively, and the triangle parameter Δ captures the distances in the BS-IRS-user triangle. For details, please see this paper. The pertinent question is what constitutes the interference I. If all BSs are active, their interference is

\displaystyle I_{\rm BS}=\sum_{x\in\Phi\setminus\{x_0\}} |g_x|^2\ell(x).
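As a small numerical illustration of this model (our sketch, not taken from the paper), one can sample G and IBS for a single realization and form the SIR. We assume Rayleigh fading on all links, ideal co-phasing of the reflected terms at the user's own matched IRS, and a power-law path loss ℓ(r)=r^(-α); none of these specifics, and none of the parameter values, are fixed by the text above.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, N, Delta, lam, L = 4.0, 32, 1e-3, 1.0, 20.0   # illustrative parameters (ours)

def rayleigh(shape):
    """Rayleigh amplitude: magnitude of a unit-variance complex Gaussian."""
    g = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(g) / np.sqrt(2)

# BSs as a PPP; the user sits at the window center and is served by the nearest BS
bs = rng.uniform(0, L, size=(rng.poisson(lam * L * L), 2))
d = np.sort(np.linalg.norm(bs - [L / 2, L / 2], axis=1))

# Desired-signal gain G: direct term plus N reflected terms, assumed to be
# co-phased by the user's own (matched) IRS
g0 = rayleigh(1)[0]
G = (g0 + np.sqrt(Delta) * (rayleigh(N) * rayleigh(N)).sum()) ** 2

# Interference from all other BSs, with power-law path loss ell(r) = r**(-alpha)
I_bs = (rng.exponential(1.0, size=len(d) - 1) * d[1:] ** (-alpha)).sum()

print("SIR =", G * d[0] ** (-alpha) / I_bs)
```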

But how about the IRSs? This is where it becomes contentious. Some argue that the IRSs emit signals and thus cause extra interference that is not captured in this sum over the BSs only. This would mean that we need to add a sum over the IRSs of the form

\displaystyle Q=\sum_{z\in\Psi\setminus\{z_0\}} p_z |g_z|^2\ell(z),

where Ψ is the point process of IRSs and pz is the power emitted by the IRS located at z. To decide whether Q is real or fake, let us have a look at the underlying network model. With fading modeled as Rayleigh, it is understood that the propagation of all signals is subject to rich scattering, i.e., there is multi-path propagation. For the case without IRSs, the propagation of interfering signals is illustrated here:

Figure 1. Propagation in a rich scattering environment without IRSs.

Figure 1 shows the user at the origin, its serving BS (blue triangle), and the interfering BSs (red triangles). The small black lines indicate scattering and reflecting objects. Some of the paths from interfering BSs via scattering objects to the user are shown as red lines.

Now, let’s add one IRS per interfering BS. The resulting propagation environment is shown in Figure 2.

Figure 2. Propagation in a rich scattering environment with IRSs.

From the perspective of the user at the origin, the IRSs are unmatched, which means they act just like any of the other scattering and reflecting objects. So the difference from the IRS-free case is that there is a tiny fraction of additional scatterers, as highlighted in Figure 3.

Figure 3. Same as Figure 2, but with the IRSs encircled.

In this case, the IRSs add 4% to the number of scattering objects. If we did ray tracing, they could make a small difference. In analytical work, where the fading model is based on the assumption of a large number of scatterers, with the number of propagation paths tending to infinity, they make no difference at all. So we conclude that there is no extra interference Q due to the presence of passive IRSs, and thus I=IBS. The signals reflected at unmatched IRSs are no different from signals reflected at any other object, and multi-path fading models already incorporate all such reflections.

Another argument that IRSs cannot cause extra interference is based on the fundamental principle of energy conservation. If Q did exist, its mean would be proportional to the density of IRSs deployed. This implies that there would be a density of IRSs beyond which the “interference” from the IRSs exceeds the total power transmitted by all BSs, which obviously is not physically possible.

Signal-to-interference, reversed

Interference is the key performance-limiting factor in wireless networks. Due to the many unknown parts in a large network (transceiver locations, activity patterns, transmit power levels, fading), it is naturally modeled as a random variable, and the (only) theoretical tool to characterize its distribution is stochastic geometry. Accordingly, many stochastic geometry-based works focus on interference characterization, and some closed-form expressions have been obtained in the Poisson case.

If the path loss law exhibits a singularity at 0, such as the popular power-law r^(-α), the interference (power) may not have a finite mean if an interferer can be arbitrarily close to the receiver. For instance, if the interferers form an arbitrary stationary point process, the mean interference (at an arbitrary fixed location) is infinite irrespective of the path loss exponent. If α≤2, the interference is infinite in an almost sure sense.

This triggered questions about the validity of the singular path loss law and prompted some to argue that a bounded (capped) path loss law should be used, with α>2, to avoid such divergence of the mean. Of course the singular path loss law becomes unrealistic at some small distance, but is it really necessary to use a more complicated model that introduces a characteristic distance and destroys the elegant scale-free property of the singular (homogeneous) law?

The relevant question is to what extent the performance characterization of the wireless network suffers when the singular model is used.

First, for practical densities, there are very few realizations in which an interferer falls within the near-field of the receiver, and if one does, the link will be in outage irrespective of whether a bounded or a singular model is used. This is because the performance is determined by the SIR, where the interference is in the denominator. Hence whether the interference is merely large or extremely large makes no difference – for any reasonable threshold, the SIR will be too small for communication.
Second, there is nothing wrong with a distribution with infinite mean. While standard undergraduate and graduate-level courses rarely discuss such distributions, they are quite natural to handle and pose no significant extra difficulty.

That said, there is a quantity that is very useful when it has a finite mean: the interference-to-(average)-signal ratio ISR, defined as

\displaystyle {\rm ISR}=\frac{\sum_{x\in\Phi\setminus\{x_0\}} h_x \|x\|^{-\alpha}}{\|x_0\|^{-\alpha}},

where x0 is the desired transmitter and the other points of Φ are interferers. The hx are the fading random variables (assumed to have mean 1), only present in the numerator (interference), since the signal power here is averaged over the fading.
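As an illustration (our sketch), assuming Rayleigh fading and nearest-transmitter association, the ISR of one network realization can be computed directly from the distances:

```python
import numpy as np

def isr(y, bs, alpha=4.0, rng=None):
    """Interference-to-(average)-signal ratio at location y for one realization:
    fading only in the numerator, nearest BS is the desired transmitter."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.sort(np.linalg.norm(np.asarray(bs, float) - np.asarray(y, float), axis=1))
    h = rng.exponential(1.0, size=len(d) - 1)        # unit-mean fading of the interferers
    return (h * d[1:] ** (-alpha)).sum() / d[0] ** (-alpha)

# Example: PPP of BSs, user at the window center
rng = np.random.default_rng(1)
bs = rng.uniform(0, 20, size=(rng.poisson(400), 2))
print(isr([10.0, 10.0], bs, alpha=4.0, rng=rng))
```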
Taking the expectation of the ISR eliminates the fading, and we arrive at the mean ISR

\displaystyle {\rm MISR}=\mathbb{E} \sum_{x\in\Phi\setminus\{x_0\}}\left(\frac{\|x_0\|}{\|x\|}\right)^{\alpha},

which only depends on the network geometry. It follows that the SIR distribution is

\displaystyle \mathbb{P}({\rm SIR}<\theta)=\mathbb{P}(h<\theta\, {\rm ISR})=\mathbb{E} F_h(\theta\,{\rm ISR}),

where h is a generic fading random variable. If h is exponential (Rayleigh fading) and the MISR is finite,

\displaystyle F_h(x)\sim x \quad (x\to 0)\qquad\Longrightarrow\qquad \mathbb{P}({\rm SIR}<\theta)\sim \theta\,{\rm MISR}\quad (\theta\to 0).
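This asymptotic relation is easy to check by simulation. The sketch below (ours, with arbitrary parameters) uses a Poisson cellular network with Rayleigh fading, estimates the MISR from the fading-free distance ratios, and compares θ·MISR with the empirical outage probability at a small θ; the two numbers should be close.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, theta, lam, L = 4.0, 0.05, 1.0, 30.0        # small theta probes the asymptotic regime

misr_samples, outage = [], []
for _ in range(10000):
    bs = rng.uniform(0, L, size=(rng.poisson(lam * L * L), 2))
    d = np.sort(np.linalg.norm(bs - [L / 2, L / 2], axis=1))   # user at the window center
    misr_samples.append(((d[0] / d[1:]) ** alpha).sum())       # ISR without fading
    h = rng.exponential(1.0, size=len(d))                      # Rayleigh fading (power gains)
    sir = h[0] * d[0] ** (-alpha) / (h[1:] * d[1:] ** (-alpha)).sum()
    outage.append(sir < theta)

print("empirical P(SIR < theta):", np.mean(outage))
print("theta * empirical MISR  :", theta * np.mean(misr_samples))
```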

Hence for small θ, the outage probability is proportional to θ, with proportionality factor MISR. This simple fact becomes powerful in conjunction with the observation that in cellular networks, the SIR distributions (plotted over θ in dB) are essentially just shifted versions of the basic SIR distribution of the PPP (and of each other).

Figure 1: Examples for SIR distributions in cellular networks that are essentially shifted versions of each other.

In Fig. 1, the blue curve is the standard SIR ccdf of the Poisson cellular network; the red one is that of the triangular lattice, which has the same shape but is shifted by about 3 dB, with very little dependence on the path loss exponent. The other two curves may be obtained using base station silencing and cooperation, for instance. Since the shift is almost constant, it can be determined by calculating the ratios of the MISRs of the different deployments or schemes. The asymptotic gain relative to the standard Poisson network (as θ→0) is

\displaystyle G_0=\frac{\rm MISR_{PPP}}{{\rm MISR}},\quad \text{where }\; {\rm MISR_{PPP}}=\frac{2}{\alpha-2},\quad \alpha>2.

The MISR in the denominator is that of the alternative deployment or architecture under consideration; the MISR for the PPP is not hard to calculate. Extrapolating to the entire distribution by applying the gain G0 everywhere, we have

\displaystyle \bar F_{\rm SIR}(\theta) \approx \bar F_{\rm SIR}^{\rm PPP}(\theta/G_0).
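As a sketch of how such a shift can be obtained in practice (ours, with arbitrary parameters), the following estimates the MISR of a triangular lattice by averaging the fading-free ISR over user locations in one period cell and forms G0 with the PPP value 2/(α-2); per the discussion above, the resulting gain should come out in the vicinity of 3 dB.

```python
import numpy as np

alpha = 4.0
# Triangular lattice of unit density: basis (s, 0) and (s/2, s*sqrt(3)/2),
# with s chosen so that the cell area s^2*sqrt(3)/2 equals 1
s = np.sqrt(2.0 / np.sqrt(3.0))
i, j = np.meshgrid(np.arange(-30, 31), np.arange(-30, 31))
bs = np.column_stack([s * (i + 0.5 * j).ravel(), (s * np.sqrt(3) / 2) * j.ravel()])

# MISR of the lattice: average the fading-free ISR over user locations
# uniformly distributed in one period cell of the lattice
rng = np.random.default_rng(4)
users = rng.uniform([0.0, 0.0], [s, s * np.sqrt(3)], size=(5000, 2))
misr_lattice = 0.0
for y in users:
    d = np.sort(np.linalg.norm(bs - y, axis=1))
    misr_lattice += ((d[0] / d[1:]) ** alpha).sum()
misr_lattice /= len(users)

misr_ppp = 2.0 / (alpha - 2.0)
G0 = misr_ppp / misr_lattice
print("MISR (triangular lattice):", misr_lattice)
print("G0 =", G0, "i.e.,", 10 * np.log10(G0), "dB")   # should be in the vicinity of 3 dB
```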

This approach of shifting a baseline SIR distribution was proposed here and here. It is surprisingly accurate (as long as the diversity order of the transmission scheme is unchanged), and it can be extended to other types of fading. Details can be found here.

Hence there are good reasons to focus on the reversed SIR, i.e., the ISR.