Month: September 2021

The transdimensional approach

In vehicular networks, transceivers are inherently confined to a subset of the two-dimensional Euclidean space. This subset is the street system where cars are allowed to move. Accordingly, stochastic geometry models for vehicular networks usually consist of two components: a set of streets and a set of point processes, one for each street, representing the vehicles. The most popular model is the Poisson line process (PLP) for the streets, combined with one-dimensional PPPs of vehicles on each line (street).
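To make this concrete, here is a minimal Python sketch (not from the post) that samples one realization of the PLP-PPP in a disk. The (r, φ) parametrization of the lines is standard; the intensity values and the normalization of the line intensity are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
R = 10.0   # radius of the simulation disk (illustrative)
mu = 0.5   # line intensity on the (r, phi) parameter space (conventions vary)
lam = 1.0  # vehicle intensity per unit street length

# PLP: each line is parametrized by the signed distance r of its foot point
# to the origin and the angle phi of its normal; the pairs (r, phi) form a PPP.
n_lines = rng.poisson(mu * 2 * R * np.pi)
r = rng.uniform(-R, R, n_lines)
phi = rng.uniform(0, np.pi, n_lines)

vehicles = []
for ri, ai in zip(r, phi):
    half = np.sqrt(R**2 - ri**2)         # half-length of the chord in the disk
    n_veh = rng.poisson(lam * 2 * half)  # 1D PPP of vehicles on that chord
    t = rng.uniform(-half, half, n_veh)  # positions along the line
    x = ri * np.cos(ai) - t * np.sin(ai)
    y = ri * np.sin(ai) + t * np.cos(ai)
    vehicles.append(np.column_stack((x, y)))
vehicles = np.vstack(vehicles) if vehicles else np.empty((0, 2))
print(f"{n_lines} streets, {len(vehicles)} vehicles in a disk of radius {R}")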

This PLP-PPP model does not have T-junctions, only intersections. Accordingly, all vehicles are of order 2 or 4, per the taxonomy introduced here. The order is determined by the number of directions in which a vehicle can move.

The PLP-PPP is a Cox process: conditioned on the line process, the vehicles form independent one-dimensional PPPs, and the underlying street system determining the intensity measure (the line process) is itself based on the PPP. Consequently, the PLP-PPP inherits a certain level of tractability from the PPP, in the sense that exact expressions can be derived for some quantities of interest. In particular, the SIR distribution (complementary cumulative distribution function, ccdf) at the typical vehicle for a transmitter at a fixed distance can be derived without difficulty. However, the expression requires the evaluation of two nested improper integrals. While such a result certainly has its value, it does not give direct insight into how the resulting SIR distribution depends on the network parameters. Also, other metrics that depend on the SIR often require further integration, most importantly the SIR meta distribution, which is calculated from the higher moments of the conditional SIR ccdf (given the point process).

This raises the question of whether it is possible to find a closed-form result that can be evaluated much more quickly and provides a tight approximation. Simply replacing the PLP-PPP by a two-dimensional PPP produces poor results, especially in the high-reliability regime (where the SIR ccdf is near 1). Similarly, considering only the one street that the typical vehicle lies on (i.e., using only a one-dimensional PPP) ignores all the interference from the vehicles on the other streets, which strongly affects the tail of the distribution.

How about a combination of the two – a superposition of a one-dimensional PPP for the typical vehicle’s street and a two-dimensional PPP for the other vehicles? In this approach, PPPs of two different dimensions are combined into a transdimensional PPP (TPPP). It accurately characterizes the interference from nearby vehicles, which are likely to lie on the same street as the typical vehicle, and captures the remaining interference without the complexity of the PLP. The three key advantages of this approach are (a simulation sketch follows the list):

  • The TPPP leads to closed-form results for the SIR ccdf that are asymptotically exact, both in the lower and upper tails (near 0 and near infinity).
  • The results are highly accurate over the entire range of the SIR ccdf, and they are obtained about 100,000 times faster than the exact results. Hence, if fast evaluation is key and a time limit of, say, one μs is specified, the transdimensional approach yields more accurate results than the exact expression. Put differently, the exact expression only leads to higher accuracy if ample computation time is available.
  • The simplicity of the TPPP extends to all moments of the conditional success probability, which greatly simplifies the calculation of the SIR meta distribution.
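To illustrate how simple the transdimensional model is to work with, the following Python sketch estimates the SIR ccdf at the typical vehicle (at the origin, on the x-axis street) by Monte Carlo. Rayleigh fading, the singular path loss law r^(-α), and all numerical values are illustrative assumptions; in particular, the density lam2 of the planar component and its behavior near the origin are choices made here, not taken from the post.

import numpy as np

rng = np.random.default_rng(2)
lam1, lam2 = 1.0, 0.1   # 1D street intensity; 2D "other streets" intensity
alpha, d = 4.0, 1.0     # path loss exponent; distance to desired transmitter
theta = np.array([0.1, 1.0, 10.0])  # SIR thresholds
R, n_mc = 50.0, 2000    # window radius; number of Monte Carlo runs

succ = np.zeros_like(theta)
for _ in range(n_mc):
    # 1D PPP on the typical vehicle's street (the x-axis)
    d1 = np.abs(rng.uniform(-R, R, rng.poisson(lam1 * 2 * R)))
    # 2D PPP for the vehicles on all other streets, uniform in the disk
    d2 = R * np.sqrt(rng.uniform(size=rng.poisson(lam2 * np.pi * R**2)))
    dist = np.concatenate((d1, d2))
    I = np.sum(rng.exponential(size=dist.size) * dist**-alpha)  # interference
    S = rng.exponential() * d**-alpha                           # desired signal
    succ += (S / I > theta)
print(dict(zip(theta, succ / n_mc)))  # empirical SIR ccdf at the thresholds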

The TPPP approach is also applicable to other street systems, including the Poisson stick model (where streets are modeled as line segments of random length) and the Poisson lilypond model, which forms T-junctions (where vehicles are of order 3). For the stick model with independent lengths, the exact expression of the nearest-neighbor distance distribution involves six nested integrals, hence a transdimensional approach is certainly warranted. More details can be found here.

To be connected or not to be

These days, “connectivity” is a very popular term in wireless networking. Related to 5G, typical statements include

  • “5G will be the main driver of wireless connectivity.”
  • “5G is designed to provide more connectivity.”
  • “5G provides 1 million connected devices per square km.”

There is also talk about “massive connectivity”, “poor connectivity”, “intermittent connectivity”, “high-speed connectivity”, “dense connectivity”, “sparse connectivity”, “ubiquitous connectivity”, “heterogeneous connectivity”, “hard connectivity”, “soft connectivity” etc. My favorite, though, is “connection-less connectivity”.

While everyone has a (vague) sense of what “connectivity” or “being connected” could mean in a wireless context, it is quite surprising that there is hardly any definition to be found in the literature. Being vague and calling on common sense is probably acceptable in media articles targeted at a general audience. In technical journals, however, including the IEEE transactions, I would expect this term to be rigorously defined. In the vast majority of articles, this is not the case; there are papers on IEEE Xplore that mention “connectivity” several dozen times, but the authors never explain what they mean by it.

For instance, if the so-called “internet-of-things” (IoT) is claimed to soon “connect” billions of devices, does that mean that each device can communicate with every other one at a certain rate with a certain latency and a certain reliability? If yes, what are the rate, latency, and reliability? Or does it mean that over the course of a long period (say a day), they can all send a message to the wired (internet) backbone? Again, what is the reliability of that happening? Or does it mean that all the devices are capable (in principle) of establishing a TCP connection to some server? Similarly, with one million “connected” devices per square km in 5G, what are they “connected” to? Each other, or a base station? At what rate/delay/reliability? It is clear that at the physical and link/MAC layers, any notion of “connectivity” would need to include probabilities (reliabilities), rates (throughput), and delay (latency). But such specifications are sorely missing in most of the literature. Further, attributes such as “massive”, “poor”, and “ubiquitous” also lack definitions, and in view of half-duplex, channel access, and other resource constraints, all connectivity is “intermittent” rather than permanent.

At the transport layer, the situation is not clear, either. Two devices can be declared “connected” if a TCP connection has been established (although this does not guarantee that they can actually exchange messages in a given time). Conversely, two devices can successfully communicate without being “connected” in the sense of the transport layer if they use a connection-less protocol (UDP). So at this level, being “connected” is neither sufficient nor necessary for communication.

At a higher level of abstraction, if a network is represented as a graph, there is a clear (mathematical) definition of what it means for the network to be connected. However, a (standard) graph is a model for a wired network, not a wireless one, for it does not account for fading, beamforming, power control, channel access, interference, and half-duplex constraints. Fading and rates could be incorporated in a weighted graph, half-duplex communication in a directed graph (digraph), and channel access in a dynamic (time-dependent) graph. Interference, however, is much more complicated to incorporate in a graph model since the success of a transmission may depend on a large set of interfering transmitters, their channel states, and their transmit powers. Also, if in a dynamic graph model a link (directed edge) from A to B exists at a certain time k and a link exists from B to C at time j, a path (or connection) from A to C is only formed if k<j.

So what is a meaningful graphical model for a wireless network based on which connectivity can be rigorously defined? Let us assume that a transmission succeeds (i.e., a link exists) if the SINR at the receiver exceeds some value θ that is determined by the coding and modulation schemes. This model incorporates all the physical-layer aspects mentioned above and, if made dynamic, channel access and other time-varying aspects.

Letting Φ denote the set of node locations (vertices), the SINR-based (geometric) digraph at time k has the directed edge set

\displaystyle \vec{E}_k=\{(x,y)\in\Phi^2\colon {\rm SINR}_{xy}>\theta\},

where SINRxy is the SINR at y when it attempts to receive from x at time k. The SINR condition implies that, for an edge to form, x is transmitting at time k while y is not (unless y is full-duplex-capable). Then

\displaystyle G_n=\left(\Phi,\bigcup_{k=1}^n \vec{E}_k\right)

is a directed multigraph (multiple edges are allowed between two vertices) that captures the entire history of successful transmissions in the network up to time n. It may be called the space-time SINR multigraph at time n. Figure 1 shows a movie of the evolution of a network with 36 nodes that transmit independently with probability 1/4 in each time slot (slotted ALOHA).

Fig. 1. Example of space-time SIR multigraph with θ=3, path loss exponent 4, no noise, and Rayleigh fading. Filled circles indicate transmitters. Edges get thicker each time their link succeeds, and they turn red when bidirectionality is first achieved.

Figure 2 shows a larger network of the same type, with 400 nodes.

Fig. 2. Same as Figure 1 but with 400 nodes.
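For readers who want to reproduce such experiments, here is a minimal Python sketch of one way to generate the edge data of the space-time multigraph. The parameters mirror the captions (θ=3, path loss exponent 4, Rayleigh fading, no noise, ALOHA with p=1/4); the 6×6 grid placement and the number of slots are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(3)
n, p, theta, alpha = 36, 0.25, 3.0, 4.0
pts = np.array([(i, j) for i in range(6) for j in range(6)], float)  # 6x6 grid
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
np.fill_diagonal(D, np.inf)  # no self-links

edges = []  # (slot, tx, rx) triples: the space-time multigraph
for k in range(20):  # 20 time slots
    tx = rng.random(n) < p  # ALOHA: independent transmit decisions
    P = rng.exponential(size=(n, n)) * D**-alpha  # Rayleigh fading x path gain
    for y in np.nonzero(~tx)[0]:  # half-duplex: only non-transmitters receive
        rx_pow = P[tx, y]         # received powers from all transmitters at y
        for j, x in enumerate(np.nonzero(tx)[0]):
            # SIR of x at y: its power over that of all other transmitters
            if rx_pow[j] > theta * (rx_pow.sum() - rx_pow[j]):
                edges.append((k, x, y))
print(f"{len(edges)} directed edges over 20 slots")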

This graph reveals how many nodes can be reached from a given node within a certain time, or how many other nodes a node can receive a message from. Information in the network propagates along causal paths, i.e., paths where the first link is established before the second before the third, etc. To simplify the identification of such paths, the time index when an edge is established can be added as an edge weight.
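Since each edge carries the slot index in which it was established, earliest-arrival times along causal paths can be computed in a single pass over the time-ordered edge list. Here is a minimal sketch; the toy edge list is hypothetical, and the function works equally on the edges generated in the previous sketch.

import math

def earliest_arrival(edges, n, src):
    # Earliest slot in which each node can receive a message originating
    # at src, using edges (slot, tx, rx) in strictly increasing slot order.
    t = [math.inf] * n
    t[src] = -1  # the source holds the message from the start
    for k, x, y in sorted(edges):
        if t[x] < k < t[y]:  # causal: the message reached x before slot k
            t[y] = k
    return t

# Toy example: node 2 is reachable from node 0 only via the causal path
# 0 -> 1 (slot 0) followed by 1 -> 2 (slot 3).
print(earliest_arrival([(0, 0, 1), (1, 2, 1), (3, 1, 2)], n=3, src=0))  # [-1, 0, 3]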

Based on this graph, notions of percolation and connectivity can be rigorously defined. For connectivity, a natural definition is that the network is connected if causal paths exist between all pairs of nodes. A fairly general result can be proven without much difficulty: For an arbitrary deterministic Φ ⊂ ℝ2, ALOHA with transmit probability 0<p<1, and a path loss exponent greater than 2, the graph G is almost surely connected if the (independent) fading variables have infinite support, irrespective of the noise level.

When an analysis for a deterministic set of locations Φ seems hard, randomizing it to a point process may improve the tractability. A good starting point, as usual, is the PPP. For the PPP, one can hope to answer questions such as:

  • How long does it take on average for a message to propagate from node x to node y (first-passage percolation)? Here x and y are deterministically added to the node set.
  • Under which condition is the average time for a node to reach any other node infinite? (If this average time is infinite, the node could be declared isolated.)
  • Is the propagation speed, defined as the distance between x and y divided by the time it takes for information to travel from x to y, zero or positive asymptotically as the distance grows to infinity?

Based on these results, parameters such as the transmit probability can be optimized.