Author: Martin Haenggi

Path loss point processes

Naturally the locations of wireless transceivers are modeled as a point process on the plane or perhaps in three-dimensional space. However, key quantities that determine the performance of a network depend neither directly nor exclusively on the locations but on the received powers. For instance, a typical SIR expression (at the origin) looks like

\displaystyle {\rm SIR}=\frac{P_y h_y \|y\|^{-\alpha}}{\sum_{x\in\Phi} P_x h_x \|x\|^{-\alpha}},

where y is the location of the intended transmitter and Φ is the point process of interferers. Px and hx are the transmit power and fading coefficient of x, respectively. It is apparent that what matters are the distances raised to some power, not the locations themselves. So instead of working with Φ⊂ℝ², we can focus on the one-dimensional process

\displaystyle \Psi\triangleq\{\|x\|^{\alpha}/h_x\colon x\in\Phi\},

called the path loss point process (PLPP) (with fading). The reason why the positive exponent α is preferred over -α is that otherwise the resulting point process is no longer locally finite (assuming Φ is stationary) since infinitely many points would fall in the interval [0,ε] for any ε>0. Transmit power levels could be included as displacements, either deterministically or randomly.

Path loss processes are particularly useful when Φ is a PPP. By the mapping and displacement theorems, the PLPPs are also PPPs whose intensity function is easy to calculate. For a stationary PPP Φ of intensity λ and iid fading, the intensity function of Ψ is

\displaystyle \mu(r)=\pi\lambda\delta r^{\delta-1}\mathbb{E}(h^\delta),\quad r\geq 0, \delta\triangleq2/\alpha.

where h is a generic fading random variable. If h has mean 1, then for δ<1, which is necessary to keep the interference finite, 𝔼(h^δ)<1 by Jensen’s inequality; hence the effect of fading is a reduction of the intensity function by a fixed factor.
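As a quick sanity check, here is a minimal simulation sketch (with illustrative parameters λ=1, α=4, and Rayleigh fading, for which 𝔼(h^δ)=Γ(1+δ)) that compares the empirical mean number of PLPP points in [0,r] with the integral of the intensity function above, πλΓ(1+δ)r^δ.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
lam, alpha = 1.0, 4.0                 # illustrative density and path loss exponent
delta = 2.0 / alpha
R, r_max, runs = 30.0, 20.0, 2000     # simulation disk radius, largest path loss value, runs

r_grid = np.linspace(0.5, r_max, 10)
emp = np.zeros_like(r_grid)
for _ in range(runs):
    n = rng.poisson(lam * np.pi * R**2)
    radii = R * np.sqrt(rng.random(n))            # uniform points in the disk of radius R
    h = rng.exponential(1.0, n)                   # Rayleigh power fading
    xi = radii**alpha / h                         # path loss point process Psi
    emp += np.array([(xi <= r).sum() for r in r_grid], dtype=float)
emp /= runs

theory = np.pi * lam * gamma(1 + delta) * r_grid**delta   # integral of mu(t) over [0, r]
print(np.round(emp, 2))
print(np.round(theory, 2))
```

The simulation disk only needs to be large enough that points outside it essentially never map below r_max.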

As an immediate application we observe that fading reduces the expected number of connected nodes, defined as those whose received power is above a certain threshold, by a factor equal to the δ-th moment of the fading coefficients.

More importantly, PLPPs lead to two key insights for Poisson cellular networks. Let us assume the elements of Ψ are ordered and denoted as ξ1 < ξ2 < … . Then the SIR with instantaneously-strongest base station association (ISBA) is

\displaystyle {\rm SIR}=\frac{\xi_1^{-1}}{\sum_{i=2}^\infty \xi_i^{-1}}.

First, it is not hard to show that for ISBA with Rayleigh fading, the SIR distribution does not depend on the density of the underlying PPP. But since the effect of fading is but a scaling of the density, it follows that the SIR distribution does not depend on the fading statistics, either. In particular, the result for Rayleigh fading also applies to the non-fading case (where ISBA corresponds to nearest-base station association, NBA), which is often hard to analyze in stochastic geometry models.

Second, the intensity function of the PLPP also shows that the SIR performance of the heterogeneous independent Poisson (HIP) model is the same as that of the simple PPP model. The HIP model consists of an arbitrary number n of tiers of base stations, each modeled as an independent PPP of arbitrary densities λk and transmitting at arbitrary (deterministic) power levels Pk. The point process of inverse received powers (i.e., the PLPP with transmit powers included) from tier k has intensity

\displaystyle \mu_k(r)=\pi\lambda_k\delta r^{\delta-1}\mathbb{E}(h^\delta)P_k^\delta,\quad r\geq 0.

Since the superposition of n PPPs is again a PPP, the overall intensity is just the sum of the μk, which is still proportional to r^(δ-1). This shows that the SIR performance (with ISBA or NBA) of any HIP model is the same as that of just a single PPP.
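This equivalence is easy to check numerically. The sketch below (illustrative tier densities and powers, α=4, Rayleigh fading, finite simulation window) estimates the ISBA SIR ccdf for a two-tier HIP model and for a plain single-tier PPP with unit power; up to Monte Carlo noise and mild edge effects from the finite window, the two estimates should agree.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, R, runs = 4.0, 30.0, 5000
theta = np.array([1.0, 3.0, 10.0])        # SIR thresholds

def rx_powers(lam, P):
    """Received powers at the origin from a PPP of density lam with tx power P and Rayleigh fading."""
    n = rng.poisson(lam * np.pi * R**2)
    r = R * np.sqrt(rng.random(n))
    return P * rng.exponential(1.0, n) * r**(-alpha)

def isba_sir_ccdf(tiers):
    hits = np.zeros(len(theta))
    for _ in range(runs):
        p = np.concatenate([rx_powers(lam, P) for lam, P in tiers])
        sir = p.max() / (p.sum() - p.max())       # instantaneously-strongest BS is the server
        hits += sir > theta
    return hits / runs

print(isba_sir_ccdf([(0.1, 1.0), (0.3, 25.0)]))   # two-tier HIP model
print(isba_sir_ccdf([(1.0, 1.0)]))                # single PPP with unit power
```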

For further reading, please refer to A Geometric Interpretation of Fading in Wireless Networks: Theory and Applications and The Performance of Successive Interference Cancellation in Random Wireless Networks.

The transdimensional approach

In vehicular networks, transceivers are inherently confined to a subset of the two-dimensional Euclidean space. This subset is the street system where cars are allowed to move. Accordingly, stochastic geometry models for vehicular networks usually consist of two components: A set of streets and a set of point processes, one for each street, representing the vehicles. The most popular model is the Poisson line process (PLP) for the streets, combined with one-dimensional PPPs of vehicles on each line (street).

This PLP-PPP model does not have T-junctions, only intersections. Accordingly, all vehicles are of order 2 or 4, per the taxonomy introduced here. The order is determined by the number of directions in which a vehicle can move.

The PLP-PPP is a Cox process: conditioned on the line process, the vehicles form independent one-dimensional PPPs, and the street system determining the (random) intensity measure, the line process, is itself based on the PPP. Consequently, the PLP-PPP inherits a certain level of tractability from the PPP, in the sense that exact expressions can be derived for some quantities of interest. In particular, the SIR distribution (complementary cumulative distribution function, ccdf) at the typical vehicle for a transmitter at a fixed distance can be derived without difficulty. However, the expression requires the evaluation of two nested improper integrals. While such a result certainly has its value, it does not give direct insight into how the resulting SIR distribution depends on the network parameters. Also, other metrics that depend on the SIR often require further integration, most importantly the SIR meta distribution, which is calculated from the higher moments of the conditional SIR ccdf (given the point process).

This raises the question whether it is possible to find a closed-form result that is much more quickly evaluated and provides a tight approximation. Simply replacing the PLP-PPP by a two-dimensional PPP produces poor results, especially in the high-reliability regime (where the SIR ccdf is near 1). Similarly, considering only the one street that the typical vehicle lies on (i.e., using only a one-dimensional PPP) ignores all the interference from the vehicles on the other streets, which strongly affects the tail of the distribution.

How about a combination of the two – a superposition of a one-dimensional PPP for the typical vehicle’s street and a two-dimensional PPP for the other vehicles? In this approach, PPPs of two different dimensions are combined into a transdimensional PPP (TPPP). It accurately characterizes the interference from nearby vehicles, which are likely to lie on the same street as the typical vehicle, and captures the remaining interference without the complexity of the PLP (a simulation sketch of this construction follows the list below). The three key advantages of this approach are:

  • The TPPP leads to closed-form results for the SIR ccdf that are asymptotically exact, both in the lower and upper tails (near 0 and near infinity).
  • The results are highly accurate over the entire range of the SIR ccdf, and they are obtained about 100,000 times faster than the exact results. Hence, if fast evaluation is key and a time limit of, say, one μs is specified, the transdimensional approach yields more accurate results than the exact expression. Put differently, the exact expression only leads to higher accuracy if ample computation time is available.
  • The simplicity of the TPPP extends to all moments of the conditional success probability, which greatly simplifies the calculation of the SIR meta distribution.
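To make the construction concrete, here is a minimal sketch of how one might sample the TPPP and estimate one value of the SIR ccdf at the typical vehicle: a one-dimensional PPP on the typical vehicle’s street (taken as the x-axis) is superimposed with a two-dimensional PPP that stands in for the vehicles on all other streets. All parameter values, including the two-dimensional density, are illustrative placeholders; the referenced work specifies how that density is matched to the underlying PLP-PPP.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, theta, d = 4.0, 1.0, 0.5      # path loss exponent, SIR threshold, link distance
lam_street = 1.0                     # linear density of interferers on the typical street
lam_plane = 0.5                      # 2D density standing in for all other streets (assumed value)
L, R, runs = 50.0, 50.0, 20000       # street half-length, disk radius, Monte Carlo runs

succ = 0
for _ in range(runs):
    # 1D interferers on the typical street (the x-axis)
    n1 = rng.poisson(2 * L * lam_street)
    x1 = rng.uniform(-L, L, n1)
    I1 = (rng.exponential(1.0, n1) * np.abs(x1)**(-alpha)).sum()
    # 2D interferers representing the vehicles on all other streets
    n2 = rng.poisson(lam_plane * np.pi * R**2)
    r2 = R * np.sqrt(rng.random(n2))
    I2 = (rng.exponential(1.0, n2) * r2**(-alpha)).sum()
    S = rng.exponential(1.0) * d**(-alpha)        # desired signal with Rayleigh fading
    succ += S > theta * (I1 + I2)

print("P(SIR >", theta, ") ~", succ / runs)
```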

The TPPP approach is also applicable to other street systems, including the Poisson stick model (where streets are modeled as line segments of random length) and the Poisson lilypond model, which forms T-junctions (where vehicles are of order 3). For the stick model with independent lengths, the exact expression of the nearest-neighbor distance distribution involves six nested integrals, hence a transdimensional approach is certainly warranted. More details can be found here.

To be connected or not to be

These days, “connectivity” is a very popular term in wireless networking. Related to 5G, typical statements include

  • “5G will be the main driver of wireless connectivity.”
  • “5G is designed to provide more connectivity.”
  • “5G provides 1 million connected devices per square km.”

There is also talk about “massive connectivity”, “poor connectivity”, “intermittent connectivity”, “high-speed connectivity”, “dense connectivity”, “sparse connectivity”, “ubiquitous connectivity”, “heterogeneous connectivity”, “hard connectivity”, “soft connectivity” etc. My favorite, though, is “connection-less connectivity”.

While everyone has a (vague) sense of what “connectivity” or “being connected” could mean in a wireless context, it is quite surprising to see that there is hardly any definition to be found in the literature. Being vague and calling on common sense is probably acceptable in media articles targeted at a general audience. In technical journals, however, including the IEEE transactions, I would expect this term to be rigorously defined. In the vast majority of articles, this is not the case; there are papers on IEEE Xplore that mention “connectivity” several dozen times, yet the authors never explain what they mean by it.

For instance, if the so-called “internet-of-things” (IoT) is claimed to soon “connect” billions of devices, does that mean that each device can communicate with every other device at a certain rate, with a certain latency and a certain reliability? If yes, what are the rate, latency, and reliability? Or does it mean that over the course of a long period (say a day), they can all send a message to the wired (internet) backbone? Again, what is the reliability of that happening? Or does it mean that all the devices are capable (in principle) of establishing a TCP connection to some server? Similarly, with one million “connected” devices per square km in 5G, what are they “connected” to? Each other, or a base station? At what rate/delay/reliability? It is clear that at the physical and link/MAC layers, any notion of “connectivity” would need to include probabilities (reliabilities), rates (throughput), and delay (latency). But such specifications are sorely missing in most of the literature. Further, attributes such as “massive”, “poor”, and “ubiquitous” also lack definitions, and in view of half-duplex, channel access, and other resource constraints, all connectivity is “intermittent” rather than permanent.

At the transport layer, the situation is not clear, either. Two devices can be declared “connected” if a TCP connection has been established (although this does not guarantee that they can actually exchange messages in a given time). Conversely, two devices can successfully communicate without being “connected” in the sense of the transport layer if they use a connection-less protocol (UDP). So at this level, being “connected” is neither sufficient nor necessary for communication.

At a higher level of abstraction, if a network is represented as a graph, there is a clear (mathematical) definition of what it means for the network to be connected. However, a (standard) graph is a model for a wired network, not a wireless one, for it does not account for fading, beamforming, power control, channel access, interference, and half-duplex constraints. Fading and rates could be incorporated in a weighted graph, half-duplex communication in a directed graph (digraph), and channel access in a dynamic (time-dependent) graph. Interference, however, is much more complicated to incorporate in a graph model since the success of a transmission may depend on a large set of interfering transmitters, their channel states, and their transmit powers. Also, if in a dynamic graph model a link (directed edge) from A to B exists at a certain time k and a link exists from B to C at time j, a path (or connection) from A to C is only formed if k<j.

So what is a meaningful graphical model for a wireless network based on which connectivity can be rigorously defined? Let us assume that a transmission succeeds (i.e., a link exists) if the SINR at the receiver exceeds some value θ that is determined based on the coding and modulation schemes. This model incorporates all the physical layer aspects mentioned above and, if made dynamic, channel access and other time-varying aspects.

Letting Φ denote the set of node locations (vertices), the SINR-based (geometric) digraph at time k has the directed edge set

\displaystyle \vec{E}_k=\{(x,y)\in\Phi^2\colon {\rm SINR}_{xy}>\theta\},

where SINRxy is the SINR at y when it attempts to receive from x at time k. The SINR condition implies that for an edge to form, x is transmitting at time k while y is not (unless y is full-duplex-capable). Then

\displaystyle G_n=\left(\Phi,\bigcup_{k=1}^n \vec{E}_k\right),

is a directed multigraph (multiple edges are allowed between two vertices) that captures the entire history of successful transmissions in the network up to time n. It may be called the space-time SINR multigraph at time n. Figure 1 shows a movie of the evolution of a network with 36 nodes that transmit independently with probability 1/4 in each time slot (slotted ALOHA).

Fig. 1. Example of space-time SIR multigraph with θ=3, path loss exponent 4, no noise, and Rayleigh fading. Filled circles indicate transmitters. Edges get thicker each time their link succeeds, and they turn red when bidirectionality is first achieved.

Figure 2 shows a larger network of the same type, with 400 nodes.

Fig. 2. Same as Figure 1 but with 400 nodes.
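The construction of the edge sets is straightforward to simulate. The sketch below (nodes placed uniformly at random in a square, which is my assumption; the figures may use a different layout) collects the directed edges of the space-time multigraph over a number of slots with ALOHA probability 1/4, θ=3, path loss exponent 4, Rayleigh fading, and no noise.

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, p, theta, alpha, n_slots = 36, 0.25, 3.0, 4.0, 20
pos = rng.uniform(0, 6, size=(n_nodes, 2))                  # assumed node layout
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)                              # no self-links

edges = []                                                  # list of (slot, tx, rx) triples
for k in range(1, n_slots + 1):
    tx = rng.random(n_nodes) < p                            # ALOHA transmitters in slot k
    h = rng.exponential(1.0, size=(n_nodes, n_nodes))       # iid Rayleigh fading per slot
    pw = h * dist**(-alpha)                                 # received powers pw[x, y]
    for y in np.where(~tx)[0]:                              # half-duplex: receivers do not transmit
        I = pw[tx, y].sum()                                 # total power from all transmitters at y
        for x in np.where(tx)[0]:
            if pw[x, y] > theta * (I - pw[x, y]):           # SIR_xy > theta
                edges.append((k, x, y))

print(len(edges), "directed edges accumulated over", n_slots, "slots")
```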

This graph reveals how many nodes can be reached from a given node within a certain time, or how many other nodes a node can receive a message from. Information in the network propagates along causal paths, i.e., paths where the first link is established before the second before the third, etc. To simplify the identification of such paths, the time index when an edge is established can be added as an edge weight.

Based on this graph, notions of percolation and connectivity can be rigorously defined. For connectivity, a natural definition is that the network is connected if causal paths exist between all pairs of nodes. A fairly general result can be proven without much difficulty: For an arbitrary deterministic Φ⊂ℝ², ALOHA with transmit probability 0<p<1, and a path loss exponent greater than 2, the graph G is almost surely connected if the (independent) fading variables have infinite support, irrespective of the noise level.

When an analysis for a deterministic set of locations Φ seems hard, randomizing it to a point process may improve the tractability. A good starting point, as usual, is the PPP. For the PPP, one can hope to answer questions such as:

  • How long does it take on average for a message to propagate from node x to node y (first-passage percolation)? Here x and y are deterministically added to the node set.
  • Under which condition is the average time for a node to reach any other node infinite? (If this average time is infinite, the node could be declared isolated.)
  • Is the propagation speed, defined as the distance between x and y divided by the time it takes for information to travel from x to y, zero or positive asymptotically as the distance grows to infinity?

Based on these results, parameters such as the transmit probability can be optimized.

Wishful thinking

Today we are listening in to a conversation between Achill and the Turtle.

Achill: I have been conducting research on the performance of wireless links for a while now, and I learnt that analyzing a fixed deterministic channel does not lead to insightful and general results. To capture a variety of channel conditions and obtain crisp analytical results, it is necessary to model the channel by a random process, even though physically there is no randomness in wireless propagation.

Turtle: Indeed. There are now families of channel models that are widely accepted, and it is mandated that researchers incorporate them in their published work. This way, the mean performance of a link (in terms of throughput, delay, and reliability) can be obtained by averaging over the likely channel conditions. In a more refined analysis, distributions of performance metrics are derived.

Achill: This is all good and nice, but lately I am trying to look beyond individual links and consider networks of wireless transceivers. In this case, the performance greatly depends on the distances between a receiver and its intended and interfering transmitters. But I don’t want to calculate results for a single fixed geometry – it is unwieldy and would apply only to those exact locations of transceivers. I know some people have randomized the propagation losses by assuming they are all iid across the network, but this would imply that all nodes have the same distance from all other nodes…

Turtle: …which would mean there can be at most d+1 nodes in a d-dimensional network.

Achill: Yes, and such a triangular or tetrahedral arrangement is very unlikely to occur. So unfortunately I have to resort to lengthy Monte Carlo simulations for my performance evaluations. If only there were analytical models, like the random processes I use for channel fading, that could characterize the network geometry…

Turtle: …plus a mathematical framework that would allow the derivation of analytical results, averaged over the likely network configurations. Or even reveal distributions of the quantities of interest. That would be extremely powerful and could lead to great new insights, much more so than simulations.

Achill: Very true. Too bad that this is just wishful thinking…

Turtle: Well, as a researcher it is important to keep an open mind.

Achill: Good point!

Randomness decreases correlation – does it?

Intuition may tell us that increasing the randomness in the system (e.g., by increasing the variance of some random variables relative to their mean) will decrease the correlation between some random quantities of interest. A prominent example is the interference or SIR in a wireless network measured at two locations or in two time slots.

Let us consider a simple example to explore whether this intuition is correct. We consider the two random variables XY1 and XY2, where Y1 and Y2 are iid exponential with mean 1 and X is Bernoulli with mean p, independent of the Yk. In this case, Pearson’s correlation coefficient is

\displaystyle \rho(p)=\frac{p-p^2}{2p-p^2}.

It is illustrated in Figure 1 below. The randomness in X, measured by the ratio of its variance to its mean, is 1-p. However, increasing this randomness (i.e., decreasing p) monotonically increases the correlation. As p approaches 0, the correlation tends to its maximum of 1/2.

Figure 1: Correlation coefficient of XY1 and XY2 where X is Bernoulli(p) and Yk are iid exponential(1).

Next, let Y1 and Y2 be independent and Bernoulli with mean p and X gamma distributed with parameters m and 1/m, such that the mean of X is 1 and the variance 1/m. Again we focus on the correlation of the two products XY1 and XY2. In this case, the correlation coefficient is

\displaystyle \rho(p,m)=\frac{p^2}{p(1+m)-m p^2},

shown in Figure 2 below for different values of m. Again, we observe that increasing the randomness in X (decreasing m) increases the correlation for all p<1. For p=1, the correlation is 1 since both random variables equal X.

Figure 2: Correlation coefficient of XY1 and XY2 where X is gamma(m,1/m) and Yk are iid Bernoulli(p).
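Both expressions are easy to verify numerically; the following quick Monte Carlo check (arbitrary parameter values and sample size) compares empirical correlation coefficients with the two formulas above.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 10**6

def pearson(u, v):
    return np.corrcoef(u, v)[0, 1]

# Example 1: X ~ Bernoulli(p), Y1, Y2 iid exponential(1)
p = 0.3
X = rng.random(N) < p
Y1, Y2 = rng.exponential(1.0, N), rng.exponential(1.0, N)
print(pearson(X * Y1, X * Y2), (p - p**2) / (2 * p - p**2))

# Example 2: X ~ gamma(m, 1/m), Y1, Y2 iid Bernoulli(p)
m = 2.0
X = rng.gamma(m, 1.0 / m, N)
Y1, Y2 = rng.random(N) < p, rng.random(N) < p
print(pearson(X * Y1, X * Y2), p**2 / (p * (1 + m) - m * p**2))
```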

So is the relationship between randomness and correlation completely counter-intuitive? Not quite, but our intuition is probably skewed towards the case of independent randomness, as opposed to common randomness. In the second example, the randomness in Y1 and Y2 decreases with p, and the correlation coefficient increases with p, as expected; here Y1 and Y2 are independent. In contrast, X is the common randomness. If its variance increases, the opposite happens – the correlation increases.

In the wireless setting, the common randomness is often the point process of transceiver locations, while the independent randomness usually comprises the fading coefficients and the channel access indicators. One of the earliest results on correlations in wireless networks is the following: For transmitters forming a PPP, with each one being active independently with probability p in each time slot (slotted ALOHA) and independent Nakagami-m fading, the correlation coefficient of the interference measured at the same location in two different time slots is (see Cor. 2 in this paper)

\displaystyle \qquad\qquad\qquad\qquad\qquad\rho(p,m)=\frac{pm}{m+1}.\qquad\qquad (*)

Here the fading coefficients have the same gamma distribution as in the second example above. As expected, increasing the randomness in the channel access (decreasing p) and in the fading (decreasing m) both reduce the correlation. Conversely, setting p=1 and letting m → ∞, the correlation coefficient tends to 1. However, the correlation is induced by the PPP as the common randomness – if the node placement were deterministic, the correlation would be 0. In other words, the interference in different time slots is conditionally independent given the PPP. This conditional independence is exploited in the analysis of important metrics such as the local delay and the SIR meta distribution.
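Here is a simulation sketch of (*). It uses a bounded path loss g(r)=1/(1+r^α) so that the interference variance is finite (the exact model assumptions behind Cor. 2 may differ); the density, window size, p, and m are illustrative choices. The Poisson point set is the common randomness and is drawn once per realization, while the ALOHA indicators and gamma fading coefficients are redrawn in each slot.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, p, m, alpha, R, runs = 1.0, 0.5, 2.0, 4.0, 20.0, 10000
I1, I2 = np.zeros(runs), np.zeros(runs)

for i in range(runs):
    n = rng.poisson(lam * np.pi * R**2)
    r = R * np.sqrt(rng.random(n))                 # common randomness: the transmitter locations
    g = 1.0 / (1.0 + r**alpha)                     # bounded path loss
    for I in (I1, I2):                             # two time slots
        on = rng.random(n) < p                     # ALOHA indicators, independent across slots
        h = rng.gamma(m, 1.0 / m, n)               # Nakagami-m power fading, independent across slots
        I[i] = (on * h * g).sum()

print(np.corrcoef(I1, I2)[0, 1], p * m / (m + 1))
```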

One last remark. The expression (*) shows that the correlation coefficient is simply the product of the transmit probability p and the Nakagami fading parameter m mapped to the (0,1) interval using the Möbius homeomorphic transform described here, which is m/(m+1). This shows a nice symmetry in the impact of channel access and fading.

Rayleigh fading and the PPP – part 2

The previous blog highlighted that the Rayleigh fading channel model and the Poisson deployment model are very similar in terms of their tractability and in how realistic they are. It turns out that Rayleigh fading and the PPP are the neutral cases of channel fading and node deployment, respectively, in the following sense:

  • For Rayleigh fading, the power fading coefficients are exponential random variables with mean 1, which implies that the ratio of mean and variance is 1. If the ratio is smaller (bigger variance), the fading is stronger. If the variance goes to 0, there is less and less fading.
  • For the PPP, the ratio of the mean number of points in a finite region to its variance is 1. If the ratio is larger than 1, the point process is sub-Poissonian, and if the ratio is less than 1, it is super-Poissonian.

Prominent examples of super-Poissonian point processes are clustered processes, where clusters of points are placed at the points of a stationary parent process, and Cox processes, which are PPPs with random intensity measures. Sub-Poissonian processes include hard-core processes (e.g., lattices or Matérn hard-core processes) and soft-core processes (e.g., the Ginibre point process or other determinantal point processes, or hard-core processes with perturbations).

There is no convenient family of point processes in which the entire range from lattice to extreme clustering can be covered by tuning a single parameter. In contrast, for fading, Nakagami-m fading represents such a family of models. The power fading coefficients are gamma distributed with parameters m and 1/m, i.e., the probability density function is

\displaystyle f(x)=\frac{m^m}{\Gamma(m)}x^{m-1}e^{-mx}

with variance 1/m. The case m=1 is the neutral case (Rayleigh fading), while 0<m<1 is strong (super-Rayleigh) fading and m>1 is weak (sub-Rayleigh) fading. The following table summarizes the different classes of fading and point process models. NND stands for the nearest-neighbor distance of the typical point.

                    fading                        point process
rigid               no fading (m → ∞)             lattice (deterministic NND)
weakly random       m > 1 (sub-Rayleigh)          repulsive (sub-Poissonian)
neutral             m = 1 (Rayleigh)              PPP
strongly random     m < 1 (super-Rayleigh)        clustered (super-Poissonian)
extremely random    m → 0                         clustered with mean NND → 0 (while maintaining density)

It is apparent that the Rayleigh-PPP model offers a good balance in the amount of randomness – not too weak and not too strong. Without specific knowledge on how large the variances in the channel coefficients and in the number of points in a region are, it is the natural default assumption. The other key reason why the combination of exponential (power) fading and the PPP is so symbiotic and popular is its tractability. It is enabled by two properties:

  • with Rayleigh fading in the desired link, the SIR distribution is given by the Laplace transform of the interference;
  • the Laplace transform, written as an expected product over the point process, has the form of a probability generating functional, which has a closed-form expression for the PPP.

The fading in the interfering channels can be arbitrary; what is essential for tractability is only the fading in the desired link.
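To illustrate the two properties, consider the simplest setup I can think of (my choice of example, not taken from the text above): a link of fixed distance d whose interferers form a PPP of density λ with unit transmit power, Rayleigh fading in all channels, and no noise. The Laplace transform of the interference is the PPP’s probability generating functional evaluated at the appropriate function, which yields the well-known closed form P(SIR > θ) = exp(−λπd²θ^δ Γ(1+δ)Γ(1−δ)). The sketch below compares it against a direct simulation.

```python
import numpy as np
from math import gamma, pi, exp

rng = np.random.default_rng(7)
lam, alpha, d, theta, R, runs = 1.0, 4.0, 0.5, 1.0, 30.0, 20000
delta = 2.0 / alpha

closed_form = exp(-lam * pi * d**2 * theta**delta * gamma(1 + delta) * gamma(1 - delta))

succ = 0
for _ in range(runs):
    n = rng.poisson(lam * pi * R**2)
    r = R * np.sqrt(rng.random(n))                       # PPP interferers in a disk
    I = (rng.exponential(1.0, n) * r**(-alpha)).sum()    # Rayleigh-faded interference
    S = rng.exponential(1.0) * d**(-alpha)               # Rayleigh-faded desired signal
    succ += S > theta * I

print(succ / runs, closed_form)
```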

Rayleigh fading and the PPP

When stochastic geometry applications in wireless networking were still in their infancy or youth, I was frequently asked “Do you believe in the PPP model?”. I usually answered with a counter-question: “Do you believe in the Rayleigh fading model?”. This “answer” was motivated by the high likelihood that the person asking was

  • familiar with the idea of modeling the effects of multi-path propagation using Rayleigh fading;
  • of the opinion that it is not only acceptable but quite natural to use a model with obvious shortcomings and limitations, for the sake of analytical tractability and design insight.

It usually turned out that the person quickly realized that the apparent shortcomings of the PPP model are quite comparable to those of the Rayleigh fading model, and that, conversely, they both share a high level of tractability.

Surely if one can accept that wireless signals propagate along infinitely many paths of comparable propagation loss with independent phases, resulting in a random received power with infinite support, one can accept a point process model with infinitely many points that are, loosely speaking, independently placed. If one can accept that at 0 dBm transmit power, there is a positive probability that the power received over a 1 km distance exceeds 90 dBm (1 MW), then surely one can accept that there is a positive probability that two points are separated by only 1 cm.

So why is it that Rayleigh fading was (and perhaps still is) more acceptable than the PPP? Is it just that Rayleigh fading has been used for wireless channel modeling for much longer than the PPP? Perhaps. But maybe part of the answer lies in what prompts us to use stochastic models in the first place.

Fundamentally there is no randomness in wireless propagation. If we know the characteristics of the antennas and the locations and properties of all objects, we can calculate the channel parameters exactly (say by raytracing) – and if there is no mobility, the channel stays fixed forever. So why introduce randomness where there is none? There are two reasons:

  • Raytracing is computationally expensive.
  • The results obtained only apply to one very specific scenario. If a piece of furniture is moved a bit, we need to start from scratch.

Often the goal is to design a communication architecture, but such design cannot be based on the layout of a specific room. So we need a model that captures the characteristics of the channels in many rooms in many buildings, but obtaining such a large data set would be very expensive, and it would be hard to derive any useful insight from it. In contrast, a random model offers simplicity and superior tractability.

Similarly, in a network of transceivers, we could in principle assume that all their locations (and mobility vectors) are known, plus their transmit powers. Then, together with the (deterministic) channels, the interference power would be a deterministic quantity. This is very impractical and, as above, we do not want to decide on the standards for 7G cellular networks based on a given set of base station and user (and pet and vacuum robot and toaster and cactus) locations. Instead we aim for the robust design that a random spatial model (i.e., a point process) offers.

Another aspect here is that the channel fading process is often perceived (and modeled) as a random process in time. Although any temporal change in the channel is but a consequence of a spatial change, it is convenient to disregard the purely spatial nature of fading and assume it to be temporal. Then we can apply the standard machinery for temporal random processes in the performance analysis of a link. This includes, in particular, ergodicity, which conveniently allows us to argue that over some time period the performance will be close to that predicted by the ensemble average. The temporal form of ergodicity appears to be much more ingrained in our thinking than its spatial counterpart, which is at least as powerful: in an ergodic point process, the average performance of all links in each realization corresponds to that of the typical link (in the sense of the ensemble average). In the earlier days of stochastic geometry applications to wireless networks this key equivalence was not well understood – in particular by reviewers. Frequently they pointed out that the PPP model (or any point process model for that matter) is only relevant for networks with very high mobility, believing that only high mobility would justify the ensemble averaging. Luckily this is much less of an issue nowadays.

So far we have discussed Rayleigh fading and the PPP separately. The true strength of these simple models becomes apparent when they are combined into a wireless network model. In fact, most of the elegant closed-form stochastic geometry results for wireless networks are based on (or restricted to) this combination. There are several reasons for this symbiotic relationship between the two models, which we will explore in a later post.

A case for T junctions

It has been established (for example, here) that the standard two-dimensional homogeneous PPP is not an adequate model for vehicular networks, since vehicles are mostly confined to streets. The Poisson line Cox process (PLCP) has naturally emerged as the model of choice. In this process, one-dimensional PPPs are placed on a street system formed by a Poisson line process. This model is somewhat tractable and thus has gained some traction in the community. With probability 1, each line (or street) intersects every other line, so intersections are formed, and the communication performance of the typical vehicle at an intersection can be studied. This is important since vehicles at intersections are more accident-prone than other vehicles.

How about T junctions? Clearly, the PLCP has no T junctions a.s. But while not quite as frequent as (four-way) intersections, they are an important building block of the street systems in every city, and it is reasonable to assume that they inherit some of the dangers of intersections. However, the performance of vehicles at T junctions has barely been modeled and analyzed. The reason is perhaps not that the problem is unworthy of study but rather the lack of a natural model. Let’s say we wanted to construct a Cox model of vehicles that is supported on a street system that has no intersections but only T junctions, with the T junctions themselves forming a stationary point process (in the same way the intersections in the PLCP form a stationary point process). What is the simplest (most natural, most tractable) model?

One model we came up with is inspired by the so-called lilypond model. From each point of a PPP, a line segment grows in a random orientation in both directions. All segments grow at the same speed until one of their endpoints hits another segment. Once all growth has stopped, the lilypond street model is obtained. Here is a realization:

Figure 1. Realization of Lilypond street model, starting with a PPP of density 0.1.
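For readers who want to play with the construction, here is a naive discrete-time sketch (my own approximation, not the code used for the figure): segments grow from Poisson seeds at unit speed in both directions of a random orientation, and an endpoint stops as soon as its next growth step would cross another segment. Window size, density, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
L, lam, dt, t_max = 15.0, 0.1, 0.05, 25.0

n = rng.poisson(lam * L**2)
centers = rng.uniform(0, L, size=(n, 2))
phi = rng.uniform(0, np.pi, n)
dirs = np.stack([np.cos(phi), np.sin(phi)], axis=1)
ends = np.stack([centers.copy(), centers.copy()], axis=1)   # ends[i, 0/1]: the two growing endpoints
active = np.ones((n, 2), dtype=bool)

def crosses(a, b, c, d):
    """True if segment a-b crosses segment c-d (standard orientation test; degenerate cases ignored)."""
    def orient(p, q, r):
        return np.sign((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))
    return orient(a, b, c) != orient(a, b, d) and orient(c, d, a) != orient(c, d, b)

t = 0.0
while active.any() and t < t_max:
    for i in range(n):
        for s, sgn in ((0, 1.0), (1, -1.0)):
            if not active[i, s]:
                continue
            new = ends[i, s] + sgn * dt * dirs[i]
            # stop (just short of the obstacle) if the step would cross any other segment
            if any(j != i and crosses(ends[i, s], new, ends[j, 0], ends[j, 1]) for j in range(n)):
                active[i, s] = False
            else:
                ends[i, s] = new
    t += dt

lengths = np.linalg.norm(ends[:, 0] - ends[:, 1], axis=1)
print("segments:", n, " mean length:", round(float(lengths.mean()), 2))
```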

Then PPPs of vehicles can be placed on each line segment to form a Lilypond line segment Cox process. Some results for vehicular networks based on this model are available here. The model has the advantage that it has only a single parameter – the density of the underlying PPP of the center points of each line segment. On the other hand, the distribution of the length of the line segments can only be bounded, and the construction naturally creates a dependence between the lengths of nearby segments, which limits the tractability. For instance, in a region with many initial Poisson points, segments will be short on average, while in a region with sparse Poisson points, segments will be long. Also, the construction implies that simulating this process takes significantly more time than simulating a PLCP.

Given the shortcoming of the model, it seems quite probable that there are other, simpler and (even) more natural models for street systems with T junctions. Let’s try and find them!

The typical user and her malfunctioning base station

Let us consider a cellular network with Poisson distributed base stations (BSs). We assume that in each Voronoi cell, one user is located uniformly at random (and independently across cells), and, naturally, the user is connected to the nucleus of that cell. This is the user point process of type I defined in this article. In this case, the typical user, named Alice, does reside in the typical cell since there is no size-biased sampling involved in defining the typical user. The downlink SIR performance of this network has been analyzed here (SIR distribution) and here (SIR meta distribution).

Suddenly and unfortunately, Alice’s serving BS is malfunctioning. Her downlink is, well, down, and she gets reconnected to the next-nearest one. How does that affect her SIR performance?

In another network, also with Poisson BSs, lives another type of typical user, namely the typical user of a stationary point process of users that is independent of the base station process. This typical user’s name is Bob. Bob’s SIR performance is the same as the one measured at an arbitrary deterministic location on the plane, as discussed in this post. He resides in the 0-cell, not the typical cell. So in his case, the typical user does not reside in the typical cell.

Noticing that Alice’s original BS ceased to operate, Bob says: “I think now your SIR performance is the same as mine. After all, your cell was formed by adding a BS at the origin, while my network has no such BS. With that added BS removed, we are in the same situation.”

Alice responds: “You may have a point, but I am not sure that my location is uniform in the 0-cell, as yours is.”

Bob: “Good point, but wouldn’t that be the natural conjecture?”

Alice: “I am not sure. How about we verify? Let’s look at the distances.”

Bob: “Ok. For me, if Bn is the distance to my n-th nearest BS, we have

\displaystyle \pi\mathbb{E}(B_n^2)=n.

Alice: “For me, the distance A1 to the malfunctioning BS satisfies π𝔼(A1²)≈10/13, by the properties of the typical cell. If you are correct, then the distribution of A2 should be the same as that of B1. But that’s not the case, see this figure.”

Mean squared distances times π. For Alice, the distances are An, for Bob, Bn.

Bob: “I see – your new serving BS at distance A2 is quite a bit further away than mine at B1. So my conjecture was wrong.”

Alice: “Yes, but the question of how resilient a cellular network is to BS outages is an interesting one. How about we compare the SIR performance with and without BS outage in different networks, say Poisson networks and lattice networks? I bet Poisson networks are more robust, in the sense that the downlink SIR statistics change less when there is an outage and users need to be handed off to the next-nearest BS.”

Bob: “Hmm… that would make sense. But would that mean we should build clustered networks, to achieve even higher robustness?”

Alice: “Possibly – if all we worry about is a small loss when a user is offloaded. But we should take into account the absolute performance also, and clustered networks are worse in this regard. If the starting performance is much higher, it is acceptable to have a somewhat bigger loss due to outage and handover.”

Bob: “Makes sense. Sorry, I have to go. My SIR is so high that I just got a phone call.”
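Here is a quick Monte Carlo check of the distances discussed in the dialogue above (a sketch; window size, sample counts, and the rejection box are illustrative choices, and mild edge effects are ignored). Alice’s typical cell is simulated via Slivnyak’s theorem, i.e., as the Voronoi cell of an extra base station placed at the origin, and her location is drawn uniformly in that cell by rejection; Bob simply sits at the origin of a PPP.

```python
import numpy as np

rng = np.random.default_rng(9)
lam, half, runs = 1.0, 5.0, 5000        # BS density, half-width of the window, realizations

A1sq, A2sq, B1sq, B2sq = [], [], [], []
for _ in range(runs):
    n = rng.poisson(lam * (2 * half)**2)
    bs = rng.uniform(-half, half, size=(n, 2))

    # Bob: distances from the origin to the nearest and second-nearest BS of the PPP
    db = np.sort(np.linalg.norm(bs, axis=1))
    B1sq.append(db[0]**2)
    B2sq.append(db[1]**2)

    # Alice: add a BS at the origin (Slivnyak); draw a uniform point in its Voronoi cell
    while True:
        u = rng.uniform(-3, 3, 2)                       # rejection proposal
        d2 = ((bs - u)**2).sum(axis=1)                  # squared distances to the other BSs
        if (u**2).sum() <= d2.min():                    # origin BS is the nearest: u lies in its cell
            break
    A1sq.append((u**2).sum())                           # squared distance to the (malfunctioning) serving BS
    A2sq.append(d2.min())                               # squared distance to the next-nearest BS

print("pi E[A1^2] ~", round(float(np.pi * np.mean(A1sq)), 3), "(text: ~10/13 = 0.769)")
print("pi E[A2^2] ~", round(float(np.pi * np.mean(A2sq)), 3))
print("pi E[B1^2] ~", round(float(np.pi * np.mean(B1sq)), 3), "  pi E[B2^2] ~", round(float(np.pi * np.mean(B2sq)), 3))
```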