Simons Conference on Networks and Stochastic Geometry


Invited Talk Abstracts

Steiner's formula and large deviations theory

Venkat Anantharam, UC Berkeley

The Shannon capacity of additive noise channels with transmitters whose codebooks are constrained by unconventional constraints, such as those that arise in energy harvesting communications, motivates the study of Steiner's formula for the volume of the parallel body of a convex body in high dimensions, in the Shannon regime. More precisely, one considers parallel bodies of a sequence of convex bodies, one in each dimension, where the volume scales exponentially with the dimension. A remarkable parallel with large deviations theory emerges in this regime. This parallel will be highlighted and the underlying applications that motivate this viewpoint will be discussed.
Joint work with Varun Jog.

On a coverage problem in communications and its relations to Poisson-Dirichlet point processes

Bartlomiej Blaszczyszyn, Ecole Normale Superieure, France

The SINR coverage process was introduced in the stochastic geometric context in 2001 to model coverage phenomena in wireless communications. Since that time it has received a lot of attention, in both the engineering and the applied probability communities. It is called the shot-noise coverage process in the recent edition of the famous book Stochastic Geometry and Its Applications. In our lecture, we will first briefly present a few historical (and perhaps historic) results regarding this model, in particular with respect to its typical cell coverage and percolation. Then, we will focus on some more recent developments related to the stationary coverage properties of the model. In particular, we will show relations to the so-called two-parameter family of Poisson-Dirichlet point processes, which also appear in economics and physics in several other, apparently different, contexts.

The Stein-Dirichlet-Malliavin method and applications to stochastic geometry

Laurent Decreusefond, Telecom ParisTech, France

We show how Malliavin calculus can enhance the general procedure of Stein's method. The versatility of this new approach is illustrated on two examples from stochastic geometry: superposition of determinantal point processes and Poisson polytopes. This talk is based on arXiv:1207.3517, arXiv:1406.5484 and some new results still in development.

Modeling and Analysis of Clusters and Exclusion Zones in D2D Networks

Harpreet Dhillon, Virginia Tech, USA

Device-to-device (D2D) networks enable direct communication between proximate mobile devices, thereby improving spectral efficiency and offloading traffic from cellular networks. As is the case in any wireless network, accurate spatial modeling of D2D networks is necessary for a meaningful performance analysis. In this talk, we present a new comprehensive model for these networks with special emphasis on two spatial characteristics: (i) devices engaged in D2D communication typically exist in clusters, e.g., users in a coffee shop or a sports bar, and (ii) interference management may introduce exclusion zones in which D2D communication is not allowed. We first propose a new tractable approach to capturing exclusion zones in an otherwise homogeneous interference field. As an application of the proposed approach, we derive a meaningful bound on the Laplace transform of the interference originating from a Poisson Hole Process. Modeling the D2D network as a Thomas cluster process, we then derive easy-to-use expressions for both coverage probability and area spectral efficiency (ASE), assuming that each device has content of interest available at another device in the same cluster. These results provide several key insights into optimal content placement that are unique to clustered D2D networks. The talk is concluded by enriching the clustered D2D model with exclusion zones and studying their effect on the performance analysis.
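As an illustration (not taken from the talk), a Thomas cluster process of the kind used here to model clustered D2D devices can be sampled in a few lines; the parameter values (parent intensity, mean cluster size, Gaussian scattering width, window size) are illustrative choices, not the authors' settings:

```python
import math
import random

def poisson_count(mean, rng):
    """Sample a Poisson random variable by Knuth's inversion method."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def thomas_cluster_process(lam_parent, mean_daughters, sigma, width, height, rng):
    """Sample a Thomas cluster process on a [0, width] x [0, height] window.

    Parents form a homogeneous Poisson process; each parent spawns a
    Poisson number of daughters, scattered around it with an isotropic
    Gaussian of standard deviation sigma.  Edge effects are ignored in
    this sketch (parents outside the window are not simulated).
    """
    devices = []
    for _ in range(poisson_count(lam_parent * width * height, rng)):
        px, py = rng.uniform(0, width), rng.uniform(0, height)
        for _ in range(poisson_count(mean_daughters, rng)):
            devices.append((px + rng.gauss(0, sigma), py + rng.gauss(0, sigma)))
    return devices

# Illustrative parameters: 0.02 clusters per unit area, 5 devices per
# cluster on average, scattering width 1.0, on a 50 x 50 window.
devices = thomas_cluster_process(0.02, 5.0, 1.0, 50.0, 50.0, random.Random(42))
```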

When Base Stations Meet Mobile Terminals, and Some Results Beyond

Gerhard Fettweis and Vinay Suryaprakash

Wireless networks are designed based on population patterns. To densify networks, the organization of a cellular network is therefore done according to user density, to minimize the cost of investment. Since populations do not appear in the form of a regular geometric grid, the planning of networks should not follow one either. Instead, a two-dimensional Poisson process is believed to be a good match to reality. This has been the basis for groundbreaking work by various researchers showing how stochastic geometry based on independent Poisson point processes can be applied. However, to minimize cost and maximize capacity, cellular networks are densified wherever the user density is high. Hence, modelling users and base stations as independent Poisson processes does not match reality but instead generates "worst case" scenarios and, therefore, weak lower bounds. Instead, the user and base station processes must reflect this correlation to be able to model reality suitably. In this talk, we shall describe our efforts towards realizing this goal. During our investigations, we find that using a Neyman-Scott process to model users clustered around base stations could be a viable alternative.

When densifying cellular networks even further, a hierarchy of cells is used, i.e. micro base stations are placed where capacity hot spots appear. Once again, correlating one process with the other (i.e. introducing a correlation among the base station processes) is pivotal for capturing reality accurately. To this end, we shall talk about our work on models using stationary Poisson cluster processes and how they can be used to study such networks.

Lastly, if time permits, we will also talk about linear algebra and matrix theory, hopefully extending the view of what is known today.

Information without rolling dice

Massimo Franceschetti, University of California-San Diego, USA

The deterministic notions of capacity and entropy are studied in the context of communication and storage of information using square-integrable, bandlimited signals subject to perturbation. The (ε,δ)-capacity, which extends the Kolmogorov ε-capacity to packing sets in high dimensions with overlap at most δ, is introduced and compared to the Shannon capacity. The functional form of the results indicates that in both Kolmogorov's and Shannon's settings, capacity and entropy grow linearly with the number of degrees of freedom, but only logarithmically with the signal-to-noise ratio. This insight transcends the details of the stochastic or deterministic description of the information-theoretic model.

For δ=0, we provide new bounds on the Kolmogorov ε-capacity, and a tight asymptotic expression for the Kolmogorov ε-entropy of bandlimited signals. We also introduce a deterministic notion of error exponent, answering a question originally posed to the speaker by Francois Baccelli.
This is joint work with Taehyung J. Lim.

Asymptotics and Meta-Distribution of the Signal-to-Interference Ratio in Wireless Networks

Martin Haenggi, University of Notre Dame, USA

The signal-to-interference ratio (SIR) is a key quantity in wireless networks that determines the link reliability, rate, and delay. Its distribution function is usually interpreted as the outage probability of a link, and its complement is the success probability.

In the first part of this lecture, we focus on Poisson bipolar networks and determine the joint outage probability of multiple transmissions over the typical link. To obtain more fine-grained information about the individual links, we then find the distribution of the conditional success probability given the point process, which we call the meta-distribution. It determines the fraction of links that achieve a target SIR with at least a certain probability.

In the second part, we turn to cellular networks, where users are connected to the base station providing the strongest signal while all others interfere. After reviewing known results for the case of Poisson distributed base stations, including multi-tier models, we will discuss an approximation technique called ASAPPP for general stationary base station point processes. ASAPPP stands for "Approximate SIR Analysis based on the Poisson Point Process". It is based on the fact that the SIR distributions for different point processes are essentially just horizontally shifted versions of each other. Remarkably, this gap between the distributions is almost constant over the entire distribution and barely depends on the fading statistics or the path loss exponents, which shows that it is mainly a geometric quantity. In the derivations, a new type of point process, the so-called relative distance process, proves useful.
The second part is based on joint work with Radha Krishna Ganti (IIT Madras).
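As a rough illustration of the meta-distribution discussed in the first part, the following sketch estimates it by Monte Carlo for an interference-limited Poisson bipolar network with Rayleigh fading, where the conditional success probability given the interferer locations has a simple product form. All parameter values are illustrative assumptions, and the network is crudely truncated to a finite disk:

```python
import math
import random

def meta_distribution(theta, x, lam, d, alpha, radius, n_real, rng):
    """Monte Carlo estimate of the SIR meta-distribution of a Poisson
    bipolar network with Rayleigh fading (interference-limited).

    Given the interferer locations, the conditional success probability
    of the typical link of length d is
        P(SIR > theta | Phi) = prod_i 1 / (1 + theta * (d / r_i)**alpha),
    where the r_i are the interferer distances to the typical receiver.
    The meta-distribution is the fraction of network realizations whose
    conditional success probability exceeds the reliability target x.
    """
    hits = 0
    area = math.pi * radius ** 2
    for _ in range(n_real):
        # Number of interferers in the disk: Poisson(lam * area).
        n, p, limit = 0, 1.0, math.exp(-lam * area)
        while True:
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        p_success = 1.0
        for _ in range(n):
            # Uniform distance in the disk; 1 - random() lies in (0, 1],
            # which keeps r strictly positive.
            r = radius * math.sqrt(1.0 - rng.random())
            p_success /= 1.0 + theta * (d / r) ** alpha
        if p_success > x:
            hits += 1
    return hits / n_real

f_bar = meta_distribution(theta=1.0, x=0.5, lam=0.01, d=3.0,
                          alpha=4.0, radius=40.0, n_real=2000,
                          rng=random.Random(1))
```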

Stochastic geometric analysis of massive MIMO networks

Robert Heath, University of Texas at Austin, USA

Cellular communication systems have proven to be a fertile ground for the application of stochastic geometry. This talk considers performance analysis in cellular systems that employ what is known as massive multiple-input multiple-output (MIMO) communication. With massive MIMO, each base station exploits a large number of antennas and uses them to serve tens of users. Massive MIMO is being studied extensively as part of fifth generation cellular networks. This presentation applies stochastic geometry to analyze the system performance of massive MIMO networks in two different configurations: uplink (users talk to base stations) and downlink (base stations talk to users). In the uplink analysis, an exclusion ball model is proposed to incorporate the correlations between the locations of the scheduled users, and is shown to match the real user process with negligible error. Further, simple expressions for the distributions of the uplink signal-to-interference ratio (SIR) and rate are derived; they reveal the scaling law that, to maintain the same aggregate uplink SIR distribution in the cell, the number of antennas should scale superlinearly with the number of scheduled users per cell, at a rate determined by the path loss exponent. For the downlink, the asymptotic SIR and rate performance is investigated as the number of antennas goes to infinity. The analysis shows that the downlink SIR coverage increases with a larger path loss exponent. The results confirm the high cell throughput of massive MIMO systems in large-scale networks.

Capacity estimates of wireless networks in Poisson shot model over uniform or fractal Cantor maps

Philippe Jacquet , Bell Labs

We want to estimate the capacity of wireless networks under several models and several geometric assumptions. In all models we consider that all nodes communicate with a single fixed access point. We first provide a Shannon capacity upper bound, second we consider the flat outage capacity, and finally an optimized outage capacity. The nodes are randomly distributed in an infinite fractal Cantor map embedded in a space of dimension D, a model which is more realistic than the classic piecemeal uniform map model. When the map is piecemeal uniform, the capacities are independent of the densities and of the fading distributions. These constants are functions of the space dimension D and of the signal attenuation factor. When the map is purely fractal, the constants still hold but with the fractal dimension d_F replacing D. However, in this case the capacities show small periodic oscillations around these constants when the node density varies. The practical consequence of this result is that the capacity increases significantly when the network map has a small fractal dimension.

How user throughput depends on the traffic demand in large cellular networks

Mohamed Karray, Orange Labs, France

We assume a space-time Poisson process of call arrivals on the infinite plane, independently marked by data volumes and served by a cellular network modeled by an infinite ergodic point process of base stations. Each point of this point process represents the location of a base station that applies a processor sharing policy to serve users arriving in its vicinity, modeled by the Voronoi cell, possibly perturbed by some random signal propagation effects. User service rates depend on their signal-to-interference-and-noise ratios with respect to the serving station. Little's law allows us to express the mean user throughput in any region of this network model as the ratio of the mean traffic demand to the steady-state mean number of users in this region.

Using ergodic arguments and the Palm theoretic formalism, we define a global mean user throughput in the cellular network and prove that it is equal to the ratio of the mean traffic demand to the mean number of users in the steady state of the "typical cell" of the network. Here, both means account for double averaging: over time and network geometry, and can be related to the per-surface traffic demand, the base-station density and the spatial distribution of the signal-to-interference-and-noise ratio. The latter accounts for network irregularities, shadowing and cell dependence via some cell-load equations.

Inspired by the analysis of the typical cell, we also propose a simpler, approximate, but fully analytic approach, called the mean cell approach. The key quantity explicitly calculated in this approach is the cell load. In analogy to the load factor of the (classical) M/G/1 processor sharing queue, it characterizes the stability condition, the mean number of users and the mean user throughput. We validate our approach by comparing analytical and simulation results for the Poisson network model with real-network measurements.
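A minimal sketch of the M/G/1 processor-sharing analogy invoked above (the function name and numbers are illustrative, not from the talk): with cell load rho = demand/capacity < 1, the steady-state mean number of users is rho/(1 - rho), and Little's law then gives a mean user throughput equal to capacity minus demand.

```python
def mean_cell_throughput(traffic_demand, capacity):
    """Mean user throughput (bit/s) in a mean-cell-style approximation.

    The cell is treated as an M/G/1 processor-sharing queue with service
    capacity `capacity` (bit/s) and traffic demand `traffic_demand`
    (bit/s).  With load rho = demand / capacity < 1, the steady-state
    mean number of users is rho / (1 - rho), and Little's law gives
    mean throughput = demand / E[N] = capacity - demand.
    """
    rho = traffic_demand / capacity
    if rho >= 1.0:
        return 0.0  # unstable cell: the number of users grows without bound
    mean_users = rho / (1.0 - rho)
    return traffic_demand / mean_users

# Example: a 20 Mbit/s cell carrying 12 Mbit/s of demand leaves each
# user with capacity minus demand, i.e. 8 Mbit/s on average.
```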

Google maps and improper Poisson line processes

Wilfrid Kendall, University of Warwick, UK

The theory of random lines has a celebrated history, reaching back 300 years to the work of Buffon, and forming a major part of the field of stochastic geometry. Recently it has found application in the derivation of surprising non-stochastic results concerning effective planar networks [1]. I plan to present an account of this, accessible to non-specialists, and also to describe more recent work concerning flows in related networks [2,3,4] and to introduce a rather curious random metric space [5].

References:
1. David J. Aldous, WSK. Short-length routes in low-cost networks via Poisson line patterns. Advances in Applied Probability 40 (2008), no. 1, 1-21.
2. WSK. Networks and Poisson line patterns: fluctuation asymptotics. Oberwolfach Reports (2008), no. 5, 2670-2672.
3. WSK. Geodesics and flows in a Poissonian city. Annals of Applied Probability 21 (2011), no. 3, 801-842.
4. WSK. Return to the Poissonian City. Journal of Applied Probability (2014), 15A, 297-309.
5. WSK. From Random Lines to Metric Spaces. Annals of Probability (to appear), 46pp.

Random sets in economics, finance and insurance

Ilia Molchanov, University of Bern, Switzerland

The theory of random sets was greatly motivated by applications in economics and finance in the path-breaking works of Aumann and Shapley 50 years ago, which later resulted in two Nobel prizes. While the theory of random sets was subsequently applied mainly in spatial statistics, networks, image analysis and materials science, over the last ten years financial and econometric applications have again come to the frontline of the theory and greatly enriched it.

This mini-course aims to describe several modern directions of such applications with emphasis on statistical inference for partially identified models in econometrics, models with transaction costs in finance, and multivariate risk assessment covering both the multiasset case for a single agent and also a network of financial agents.

Stochastic Geometry and the User Experience in a Wireless Cellular Network

Sayandev Mukherjee, DOCOMO Innovations

The last five years have seen a remarkable increase in our knowledge of the behavior of wireless cellular systems, aided primarily by the use of point process models for the locations of the base stations (access points) and users. In particular, when the locations of the base stations and users in a cellular network are modeled by the points of independent homogeneous Poisson point processes (PPPs), a nearly complete picture of the distribution of the signal-to-interference ratio at user terminals and base stations may be obtained analytically without approximation.
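For the baseline case alluded to here (nearest-base-station association, Rayleigh fading, interference-limited), the SIR distribution for Poisson-distributed base stations has a well-known integral expression; the sketch below evaluates it numerically, with the quadrature parameters being illustrative choices:

```python
import math

def coverage_probability(theta, alpha, n_steps=200000, u_max=1000.0):
    """SIR coverage of the typical user in a Poisson cellular network
    (nearest-base-station association, Rayleigh fading, noise ignored):

        P(SIR > theta) = 1 / (1 + rho),
        rho = theta**(2/alpha) * integral over u from theta**(-2/alpha)
              to infinity of 1 / (1 + u**(alpha/2)) du.

    The integral is evaluated with a simple trapezoidal rule truncated
    at u_max; the neglected tail is small for alpha > 2.
    """
    lo = theta ** (-2.0 / alpha)
    h = (u_max - lo) / n_steps
    f = lambda u: 1.0 / (1.0 + u ** (alpha / 2.0))
    integral = 0.5 * (f(lo) + f(u_max)) * h
    integral += h * sum(f(lo + i * h) for i in range(1, n_steps))
    return 1.0 / (1.0 + theta ** (2.0 / alpha) * integral)

# For alpha = 4 the integral reduces to the closed form
# rho = sqrt(theta) * (pi/2 - atan(1/sqrt(theta))), so
# P(SIR > 1) = 1 / (1 + pi/4), roughly 0.56.
```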

This talk is intended to provide an overview of how stochastic geometry can give us insights into the "user experience" in a cellular wireless network. The metrics we consider to quantify this "user experience" are throughput, bit rate, and spectral efficiency. Further, each of these metrics is studied from two viewpoints that are related but distinct. The first perspective is that of a subscriber to the cellular service, who is interested in the data rate he/she receives anywhere in the network (as opposed to the data rate enjoyed by any other user). The second perspective is that of the cellular network operator, who is interested in the distribution of data rate across all users located all over the network.

It is quite remarkable that, as we shall see, stochastic geometry provides the tools to analyze not only the association rules whereby a serving base station is chosen for a given user, but also the data rate to that user when the chosen serving base station transmits to this user, and the area spectral efficiency over all transmissions made by a "typical" base station anywhere in the network. In addition, we shall touch upon the ergodic rate on a link to a user and its similarity to, and difference from, area spectral efficiency. Lastly, we shall provide a brief overview of generalizations to multi-tier, so-called heterogeneous, cellular networks, and to transmissions involving multiple cooperative base stations.


Mathematical tools for analysis, modeling and simulation of spatial networks on various length scales

Volker Schmidt, University of Ulm, Germany

Random point processes and random tessellations are fundamental classes of models in stochastic geometry. They can be used in order to describe structural properties of complex point patterns and cellular systems occurring in various disciplines and on various length scales, like telecommunication networks on geographic scales, and networks of particles, crystals and molecules in materials science on microscopic scales.

In this tutorial we discuss several examples of random point processes, including the homogeneous Poisson process, Poisson cluster processes, Poisson hard-core processes, and Cox processes. We show how these point-process models can be applied to describe the locations of objects in the plane and in higher-dimensional Euclidean spaces. Furthermore, we discuss examples of random tessellations, like Voronoi, Delaunay and Laguerre tessellations, which can be used to describe, e.g., networks of communication paths and the boundaries of cells in cellular systems.
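As a small illustration of one of the model classes listed above, a Matern type-II hard-core process (one standard example of a Poisson hard-core process) can be simulated by dependent thinning of a homogeneous Poisson process; the parameters below are illustrative:

```python
import math
import random

def matern_ii_hardcore(lam, r_hard, width, height, rng):
    """Matern type-II hard-core process by dependent thinning.

    Sample a homogeneous Poisson process on the window, attach an
    independent uniform mark to every point, and keep a point only if
    no other point within distance r_hard carries a smaller mark.  The
    retained points are then guaranteed to be at least r_hard apart.
    """
    # Poisson number of proposal points on the window (Knuth inversion).
    n, p, limit = 0, 1.0, math.exp(-lam * width * height)
    while True:
        p *= rng.random()
        if p <= limit:
            break
        n += 1
    proposals = [(rng.uniform(0, width), rng.uniform(0, height), rng.random())
                 for _ in range(n)]
    kept = []
    for x, y, m in proposals:
        if all(m < m2 or math.hypot(x - x2, y - y2) > r_hard
               for x2, y2, m2 in proposals if (x2, y2, m2) != (x, y, m)):
            kept.append((x, y))
    return kept

points = matern_ii_hardcore(0.1, 2.0, 30.0, 30.0, random.Random(7))
```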

We also discuss the concept of Palm calculus for stationary point processes and stationary tessellations which can be used to justify algorithms for efficient (local) simulation of typical objects, e.g., the typical cell of a stationary tessellation.

Finally, we discuss several examples of structural characteristics of stochastic network models, and mathematical techniques in order to compute them, e.g., the distribution of typical shortest-path lengths, and other related structural characteristics, like connectivity, tortuosity and constrictivity of communication paths along the edges of stationary networks.

Application of Stochastic Geometry in Modeling Future LTE-A and 5G Wireless Networks

Charlie Zhang, Samsung Telecommunications America, USA

As radio access technologies continue to evolve to support increasing traffic demand and new services, system models have been critical for research into performance trends that can lead to the development of new algorithms. However, it is often challenging to accurately incorporate many of the complexities of real-world deployments. In this talk we discuss the evolution of system models within the wireless communications industry and highlight the need for flexible analytical and simulation techniques such as stochastic geometry in the evaluation of emerging wireless communications technologies, including HetNets, mmWave cellular networks, and D2D. In particular, we highlight the use of stochastic geometry in throughput analysis for sensing-based spectrum sharing D2D networks. The spatial false alarm probability and detection probability are analyzed and a closed-form expression for the spectrum sensing threshold is derived. Assuming channel inversion is used, we analytically characterize the achievable rates for D2D users and cellular users. We find that in these hybrid sensing-based D2D and cellular networks, there exists an optimal sensing radius that maximizes the proportional throughput of the system.
Co-authors are Thomas Novlan, Hao Cheng and Lingjia Liu.

Generalised Eden growth model and random planar trees

Sergei Zuyev, Chalmers University of Technology, Sweden

In the classical Eden growth model, a crystal on a grid extends to nearby nodes at a rate proportional to the number of already crystallised neighbours. This model is equivalent to first-passage percolation with exponential passage times. An extension of this model is to allow a general weight for different numbers of crystallised neighbours. Classical subadditivity arguments allow establishing the existence of a limiting shape for the most natural case of monotone rates: the more neighbours are crystallised, the sooner a boundary node crystallises too. However, non-monotone rates represent a real challenge, for which all the classical approaches to establishing the shape result fail. Computer simulations suggest that even in this case a limiting shape does exist. We present partial results for such models and discuss some of their intriguing features: the existence of holes, the exponential bound on their size, and possible long-range structural dependence.

Submitted Talk Abstracts

Optimal routing in random ad-hoc networks with multiple antennas

Itsik Bergel, Bar-Ilan University, Israel

We analyze and optimize the performance of distributed routing schemes in multihop wireless ad-hoc networks. We assume that the nodes are distributed according to a Poisson point process (PPP) and consider routing schemes that select the next relay based on geographical locations and local knowledge of the channel state information (CSI). At the first stage we define the optimization problem and give an expression for the optimal routing metric in a network that achieves the ergodic rate density (ERD) in each link. We then use a recently published bound on the ERD and present a low-complexity, suboptimal routing metric. Both metrics are derived for the cases of single-antenna and multiple-antenna nodes, and are also able to locally optimize the instantaneous number of spatial streams and the precoding weights. Numerical results demonstrate that the low-complexity scheme performs nearly as well as the optimal scheme, and both schemes outperform previously published schemes.

Estimating the transmission probability in wireless networks with configuration models

Paola Bermolen, University of the Republic, Uruguay

We propose a new methodology to estimate the probability of successful transmission for random access scheduling in wireless networks. Instead of focusing on spatial configurations of users, we model the interference between users as a random graph. Using configuration models for random graphs, we show how the properties of the medium access mechanism are captured by some deterministic differential equations when the size of the graph gets large. Performance indicators such as the probability of connection of a given node can then be efficiently computed from these equations. We also perform simulations to illustrate the results on different types of random graphs. Even on spatial structures, these estimates get very accurate as soon as the variance of the interference is not negligible. (Joint work with M. Jonckheere, F. Larroca and P. Moyal.)
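As an illustrative companion to the differential-equation approach (which this sketch replaces by direct simulation), one can build a configuration-model graph by stub matching and estimate the transmission probability of a simple hard-core random-access mechanism, where nodes wake in random order and transmit unless a neighbour already does. The degree sequence and seed are illustrative assumptions:

```python
import random

def transmission_probability(degrees, rng):
    """Estimate the transmission probability of a hard-core random-access
    mechanism on a configuration-model graph.

    The graph is built by uniform stub matching (self-loops discarded,
    multi-edges merged).  Nodes then wake up in uniformly random order
    and transmit unless one of their neighbours already transmits, which
    yields a greedy maximal independent set of transmitters.
    """
    stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    rng.shuffle(stubs)
    neigh = [set() for _ in degrees]
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            neigh[a].add(b)
            neigh[b].add(a)
    order = list(range(len(degrees)))
    rng.shuffle(order)
    active = set()
    for v in order:
        if not neigh[v] & active:
            active.add(v)
    return len(active) / len(degrees)

# 1000 nodes of degree 3 (illustrative); the estimate is the fraction
# of nodes that get to transmit.
p_hat = transmission_probability([3] * 1000, random.Random(3))
```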

Large-deviation principles in SINR-based wireless network models

Christian Hirsch, Weierstrass Institute for Applied Analysis and Stochastics

Large-deviation theory is an established field of probability that is used to study the probability of rare events in large systems, and it can be seen as one of the mathematical foundations of statistical physics. The classical process-level large-deviation principle for Poisson point processes due to H.-O. Georgii and H. Zessin (1993) has found numerous applications in statistical physics. We show how this tool can be used to answer two questions arising in SINR-based ad-hoc wireless networks consisting of a Poisson point process of transmitters and a Poisson point process of receivers. First, we establish a large-deviation principle for the proportion of transmitters in a large box that experience some undesired event. For instance, this could be the event of isolation, i.e., the inability to transmit messages to any receiver. Second, we consider the isolation probability of a typical transmitter and show that it decays exponentially fast as the SINR-based connection threshold tends to 0. Moreover, we provide a variational characterization of the rate of decay. Finally, we show how this characterization leads to an importance sampling algorithm that can be used to provide an accurate estimate of the actual numerical value of the isolation probability.

This talk is based on a joint project with B. Jahnel, P. Keeler, W. Koenig and R. I. A. Patterson.

When do wireless network signals appear Poisson?

Paul Keeler, Weierstrass Institute for Applied Analysis and Stochastics

The majority of stochastic geometry models of wireless networks are based on the Poisson point process, which is justified by its clear tractability. Recently, there have been some convergence results showing that non-Poisson networks of wireless transmitters can appear Poisson (in terms of signal strengths) to a single observer in the network, provided the log-normal shadowing is sufficiently strong. In this spirit, we present a convergence result showing that, under general conditions satisfied by a wide class of fading models and path-loss functions, the point process of signal strengths can be well-approximated by an inhomogeneous Poisson or a Cox point process on the positive real line. Under appropriate conditions, these results support the use of a spatial Poisson point process for the underlying positioning of transmitters in models of wireless networks, even if in reality the positioning does not appear Poisson.

Inter-operator resource sharing, stochastic geometry, and the future of wireless networks

Luis da Silva,Trinity College, Dublin

As wireless operators face enormous projected increases in demand without corresponding revenue growth, increased network sharing is inevitable. This trend manifests itself in the push for resource virtualization, dynamic spectrum access, and active infrastructure sharing among operators. In this presentation, we will articulate our vision for future wireless networks, which relies on the virtualization of the wireless access, and how stochastic geometry can help us understand the potential and limits of this vision. As a first step, we rely on cellular deployment data from three European countries to argue for the use of clustered point processes to model multi-operator deployments. From there, we outline some of the open questions in resource management for shared networks that can be addressed through stochastic geometry and call for collaborations in solving those questions.

Achieving Non-Zero Information Velocity in Wireless Networks

Rahul Vaze,Tata Institute of Fundamental Research, Mumbai

In wireless networks where each node transmits independently of other nodes (the ALOHA protocol), the expected delay experienced by a packet until it is successfully received at any other node is known to be infinite for the signal-to-interference-plus-noise-ratio (SINR) model with node locations distributed according to a Poisson point process. Consequently, the information velocity, defined as the limit of the ratio of the distance to the destination and the time taken for a packet to successfully reach the destination over multiple hops, is zero as the distance tends to infinity. A power control policy based on the nearest-neighbour distance is proposed to show that the expected delay required for a packet to be successfully received at the nearest neighbour can be made finite. Moreover, the information velocity is also shown to be non-zero under the proposed power control policy. The condition under which these results hold does not depend on the intensity of the underlying Poisson point process.
