Double-proving by simulation?

Let us consider a hypothetical scenario that illustrates an issue I frequently observe.


Author: Here is an important result for a canonical Poisson bipolar network:
Theorem: The complementary cumulative distribution function (ccdf) of the SIR in a Poisson bipolar network with Rayleigh fading, transmitter density λ, link distance r, and path loss exponent α = 2/δ is

\displaystyle \bar F(\theta)=\exp(-\lambda\pi r^2 \theta^\delta\Gamma(1-\delta)\Gamma(1+\delta)).

Proof: [Gives proof based on the probability generating functional.]
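
(For concreteness, the PGFL argument alluded to runs roughly as follows: with δ = 2/α, i.i.d. unit-mean exponential (Rayleigh) fading powers h and h_x, and interference I = \sum_{x\in\Phi} h_x \|x\|^{-\alpha} at the typical receiver,

\displaystyle \bar F(\theta)=\mathbb{P}\big(h>\theta r^\alpha I\big)=\mathbb{E}\,e^{-\theta r^\alpha I}=\exp\!\Big(-\lambda\int_{\mathbb{R}^2}\frac{\theta r^\alpha \|x\|^{-\alpha}}{1+\theta r^\alpha \|x\|^{-\alpha}}\,\mathrm{d}x\Big)=\exp\!\big(-\lambda\pi r^2\theta^\delta\,\Gamma(1-\delta)\Gamma(1+\delta)\big),

where the third equality uses the probability generating functional of the PPP and the last step evaluates the integral in polar coordinates.)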

Reviewer: This is a nice result, but it is not validated by simulation. Please provide simulation results.


We have a proven exact analytical (PEA) result. So why would we need a simulation for “validation”? Where does this lack of trust in proofs come from? These reviewer requests puzzle me. A similar issue arises when authors themselves feel the need to add simulated curves to plots of PEA results.
Perhaps some reviewers are not familiar with the analytical tools used, or they find it easier to glance at a simulated curve than to check a proof. Perhaps some authors are not entirely sure their proofs are valid, or they believe reviewers are more likely to trust a proof if simulations are also shown.
The key issue is that such requests by reviewers or simulations by authors treat simulation results as the “ground truth” while portraying PEA results as weaker statements that need validation. This is of course not the case. A PEA result expresses a mathematical fact and thus does not need any further “corroboration”.
Now, if the simulation is accurate, the analytical and simulated curves lie exactly on top of each other, and the accompanying text states the obvious: “Look, the curves match!” But what if the analytical and the simulated curve do not match exactly? Then the simulation is inaccurate, which certainly does not qualify as “validation”. The worst conclusion would be to distrust the PEA result and take the simulation as the true result.
By its nature, a simulation is always restricted to a small cross-section of the parameter space. Even the simple result above has four parameters, which makes a comprehensive simulation campaign impractical. Relatedly, I invite the reader to simulate the result for a path loss exponent α = 2.1, i.e., δ ≈ 0.95. Almost surely the simulated curve will look quite different from the analytical one.
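To make this concrete, here is a minimal Monte Carlo sketch (in Python; the parameter values and the window radius R are chosen purely for illustration) that estimates the SIR ccdf for δ ≈ 0.95 in a finite disk and compares it with the exact expression from the theorem:

import math
import numpy as np

rng = np.random.default_rng(1)

# hypothetical parameter values, chosen only for illustration
lam, r, alpha = 0.1, 0.25, 2.1             # transmitter density, link distance, path loss exponent
delta = 2 / alpha                          # delta ~ 0.95
theta = 10 ** (np.arange(-10, 11) / 10)    # SIR thresholds, -10 dB to 10 dB
R, runs = 50.0, 10_000                     # finite simulation window radius, number of realizations

ccdf_sim = np.zeros_like(theta)
for _ in range(runs):
    n = rng.poisson(lam * math.pi * R**2)               # number of interferers in the disk of radius R
    d = R * np.sqrt(rng.random(n))                      # their distances to the typical receiver at the origin
    I = np.sum(rng.exponential(size=n) * d**(-alpha))   # Rayleigh fading: exponential interference powers
    S = rng.exponential() * r**(-alpha)                 # received power of the desired link of length r
    ccdf_sim += S > theta * I                           # indicator that the SIR exceeds each threshold
ccdf_sim /= runs

# the exact ccdf of the theorem
ccdf_exact = np.exp(-lam * math.pi * r**2 * theta**delta
                    * math.gamma(1 - delta) * math.gamma(1 + delta))

print(np.column_stack((10 * np.log10(theta), ccdf_sim, ccdf_exact)))

Since the mean interference from the transmitters beyond the window, 2πλR^(2-α)/(α-2), decays only like R^(-0.1) for α = 2.1, any practical window leaves a systematic bias that averaging over more realizations cannot remove, and the simulated curve sits visibly above the exact one.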
In conclusion, there is absolutely no need for “two-step verification” of PEA results. On the contrary.

7 thoughts on “Double-proving by simulation?”

  1. It makes sense to support with simulations only those analytical results that are approximations or bounds. On a similar note, for numerical or simulation results, authors are often asked to state the unit of a quantity, e.g., meters for a distance. But many analytical results are scale-invariant, e.g., in the intensity of the point process. A post on this topic would be nice 🙂

  2. The problem lies in gathering quality reviews and in editors’ lack of freedom to bypass red tape.
    While stochastic geometry has seen quite a lot of interest from the community, we often see system models in which the underlying point process does not satisfy the singularity, stationarity, and other conditions assumed, yet PPP results are still applied without explicit justification. Under those conditions, the PPP results are only approximations, and it therefore makes sense to validate them against simulations (when done carefully).

    1. You are talking about a different issue, which is that of validating models. It is certainly important to validate models against real data, or compare simplified models against models that are known to be highly accurate. My post, however, is about validating PEA results. Here the assumption is that the result is exact, which implies that to “validate” it, you would use the very same model. So in my toy example, the reviewer asks the author to simulate the PPP model to “verify” the result, rather than to verify the PPP model against actual node locations.
      Secondly, I am not sure what you mean by “red tape”. When I was editor-in-chief of the Trans. on Wireless Comm., I discussed this issue with my editorial board, and I requested that they tell authors that they should ignore reviewers’ comments that ask for “validation” of PEA results; instead, editors should reinforce the comments where reviewers ask to remove such unnecessary simulation results and, if no reviewer notices, they should themselves ask the authors to remove them.

      1. Yes, I was talking more about why this habit of validating PEA results develops: from a lack of awareness of exactly the distinction you are making here.
        Unfortunately, in my limited editing experience, not all EiCs give GEs the authority to overrule vague comments made by reviewers. Also, if GEs have to fight to justify their decisions (based on a technicality and not personal preference) on every paper, it becomes quite stressful :).

      2. Any editorial position should come with such authority. Otherwise, what do you do if one reviewer contradicts another? And it’s not about overruling but about adhering to the journal’s policy. Most journals I am aware of ask authors to be concise and not to add unnecessary material to their paper. A simulation of a PEA result falls squarely into this category: it does not add anything to the paper (on the contrary, if the PEA and simulated curves don’t match, it opens a can of worms the authors need to address). Lastly, this usually does not rise to the point where it changes an editor’s decision. A paper isn’t rejected solely because the authors included unnecessary simulation results; instead, the authors are given a chance to remove them in a revision. In doing so, they may even save some money (page charges).

  3. Completely agree with you on all points. More often than not, with the growth in the number of papers, the editor may not be a subject expert, so a request for simulations of PEA results, and any resulting discrepancy between simulation and analysis, can lead to a change in the decision. Shouldn’t these types of issues also be highlighted in the relevant transactions to generate awareness more broadly in the community? A short note like this, perhaps under a different section such as comments, would be really beneficial for many researchers. Also, sometimes you need to report what does not work rather than what works, and a brief note is a perfect way of doing so.
