# A Distributed Dynamic Frequency Allocation Algorithm

###### Abstract

We consider a network model where the nodes are grouped into a number of clusters and propose a distributed dynamic frequency allocation algorithm that achieves performance close to that of a centralized optimal algorithm. Each cluster chooses its transmission frequency band based on its knowledge of the interference that it experiences. The convergence of the proposed distributed algorithm to a sub-optimal frequency allocation pattern is proved. For some specific cases of spatial distributions of the clusters in the network, asymptotic bounds on the performance of the algorithm are derived and comparisons to the performance of optimal centralized solutions are made. These analytic results and additional simulation studies verify performance close to that of an optimum centralized frequency allocation algorithm. It is demonstrated that the algorithm achieves about 90% of the Shannon capacities corresponding to the optimum/near-optimum centralized frequency band assignments. Furthermore, we consider the scenario where each cluster can be in active or inactive mode according to a two-state Markov model. We derive conditions to guarantee finite steady state variance for the output of the algorithm using stochastic analysis. Further simulation studies confirm the results of stochastic modeling and the performance of the algorithm in the time-varying setup.

## I Introduction

Dynamic frequency allocation plays an important role in the performance of wireless ad-hoc networks, since it reduces the required transmission power, a crucial objective in communication networks. To do this in an optimal way, there needs to be a centralized processor with full knowledge of the spatial distribution profile of the network clusters. However, in many emerging wireless networks (such as ad hoc networks, cognitive radios, etc.), no central frequency allocation authority is naturally available. This makes distributed frequency allocation an important, but mostly uncharted territory in wireless networking.

Centralized frequency allocation has been extensively studied in the literature (Please see [8] and [11]). There are a number of proposed solutions to similar problems in different contexts (Please see [2], [4], [6], [9], [10], and [12]). These include methods based on graph coloring for cognitive networks, iterative waterfilling for Digital Subscriber Lines (DSL), etc. These approaches either excessively simplify the interference models, or are not fully decentralized, or require too much information exchange between autonomous nodes/clusters, or suffer from all these shortcomings. Additionally, they are all too complex to implement. In [10], the approach is based on approximating the optimal resource allocation solutions on a graph. Others [9] propose that secondary users choose their spectrum according to their information about their local primary and secondary neighbors. They employ a simplified model for mutual interference of the network nodes that turns the problem into the graph multi-coloring problem. They subsequently compute a sub-optimal solution to the graph multi-coloring by using an approximation algorithm to the graph labeling problem.

In the context of Digital Subscriber Lines (DSL), some recent works regarding spectrum balancing have been done (Please see [2] and [12]). The objective of spectrum balancing in DSL systems is to maximize the throughput of each user by shaping its Power Spectral Density (PSD) of transmission, satisfying a certain power constraint. In [12], a method of iterative waterfilling is proposed in order to solve the problem. In the case of two users, they show the existence and conditions on the uniqueness of a Nash equilibrium point for the iterative algorithm. However, each user must know a weighted sum of the PSD of the other users (interference), in order to do waterfilling. The iterative waterfilling algorithm has high complexity and the resulting Nash equilibrium point is not necessarily the optimal solution. For instance, in a two-user scenario, if both users start with a flat PSD initially, iterative waterfilling does not change their PSD. This is clearly a Nash equilibrium point, but is far from the optimal solution. In [4], it is shown that this non-optimal Nash equilibrium point might be the only Nash equilibrium, and therefore iterative waterfilling fails for various scenarios.

In [2], the users need to balance their power over a number of tones in order to optimize their throughput under power constraints. The optimization problem is relaxed by introducing a virtual user with fixed thresholds. This turns the problem into a separable optimization problem across the tones for different users. An algorithm has been proposed to solve the relaxed problem iteratively via solving local optimization problems by the users. The knowledge of a weighted sum of the PSD of the other users (interference) is required for each user to solve its local optimization problem. The convergence of the algorithm has been shown in the high-SNR regime. Simulations show that the achievable region resulting from the solution of the relaxed distributed optimization is close to that of the optimal centralized solution. However, no one-to-one correspondence between the points of the achievable regions of the optimal (centralized) and decentralized algorithms is guaranteed. Therefore, the algorithm does not necessarily converge to optimal values. For the case of asynchronous transmission (in the presence of ICI), the optimization problem is not separable across the tones. They have therefore used heuristic optimization approaches with no convergence guarantees.

In [4], it is shown that the problem of optimal PSD shaping across the users is reducible to that of allocating piece-wise constant powers. This result reduces the complexity of the spectrum sharing problem. Furthermore, a number of achievability and existence results in the context of non-cooperative and cooperative game theory for obtaining efficiency and fairness have been established. Another approach has been presented in [6], where each user in the network announces a price to the other users to adapt the power allocation accordingly. Convergence results have been established using supermodular game theory.

In the model we consider, the nodes are divided into different clusters and each cluster is represented with a cluster head. This is motivated by the fact that various networks are naturally clustered (e.g., combat scenario, WLAN Hotspots, WPAN). Each cluster head, having knowledge about the interference it experiences, chooses the frequency band with the least amount of interference from the other clusters. The channel model we consider is the common path loss model, which gives a more refined model than that of the existing literature. It is shown that this distributed strategy converges to a sub-optimal spectrum assignment, without any cross-cluster information exchange. In other words, the algorithm converges to a local minimum of the aggregate interference of the network. Simulation results (Section V) show that the minimization of the aggregate interference of the network results in a sub-optimal solution for the problem of maximizing the aggregate Shannon capacity of the network links. It must be noted that regardless of the model, channel reciprocity is sufficient for the convergence of the algorithm.

The proposed algorithm provides a simple, fully distributed, dynamic frequency allocation strategy that requires neither any information exchange between autonomous devices, nor even any knowledge of the existence of other autonomous entities. Additionally, the proposed algorithm can be used in conjunction with any realistic wireless radio channel model such as those commonly employed in wireless standards (Hata model, Okumura model, etc.). We will further propose performance bounds on this sub-optimal spectrum assignment for some specific network topologies.

We also present a framework to analyze the scenario in which clusters can be in sleep or active mode, switching on and off according to time-varying statistics. In our model, the activity of the clusters is described by a stochastic process. For simplicity of the analysis, we assume that all the clusters switch on and off independently according to a two-state Markov model. We have shown that the temporal dynamics of the algorithm can be accurately modeled by an exponential decay. We have derived a stochastic dynamical equation for the evolution of the aggregate interference of the network. The time variation of the activity of clusters is also included in the model, which results in a stochastic differential equation for the steady state behavior of the algorithm. We have further derived a trade-off inequality in terms of the update rate, the switching rate between sleep and active modes, and the geometrical properties of the distribution of users, guaranteeing a certain steady state variance on the performance of the algorithm. Simulation results verify the accuracy of the stochastic modeling and the performance of the algorithm in the presence of time-varying statistics.

The outline of this paper is as follows. In Section II, we discuss the system model and the assumptions employed for our analysis. We present the Main Algorithm in Section III. In Section IV, we present the main results of this paper regarding the convergence and performance of the main algorithm. Simulation results for some specific network topologies are provided in Section V. Finally, we discuss the contributions of the proposed method and future areas of research in Section VI. The proofs of the main results can be found in Appendices A and B.

## II System Model

Suppose that we have a set of network nodes distributed in space such that they can be partitioned into a union of clusters. Such clustered networks arise naturally; for instance, in a combat scenario a group of soldiers may be divided into a number of clusters according to their missions. Communication within clusters is then highly desirable. There are a number of efficient methods for partitioning the network elements, but this topic is not the focus of this paper. We assume that the clusters are already formed in a specified manner.

In light of the above, our network model is given by a collection of nodes in clusters, , , where each cluster has a cluster head responsible for managing some of the network functions. Let denote the distance between the cluster heads of and (Fig. 1). We use the following assumptions for the model:

The th cluster, , contains users.

At each time slot for any cluster *at most* one user is transmitting and one user is receiving. For simplicity of mathematical analysis, we have assumed that at each time slot for any cluster *exactly* one user is transmitting and one user is receiving (See Sections IV-F and V). This assumption can be relaxed to any scenario satisfying channel reciprocity between clusters. For instance, in an alternative scenario each user transmits and receives its data through the cluster head. This model also satisfies the channel reciprocity conditions. Therefore, the results can be generalized to various other models.

Each user transmits with power , where is a constant such that the power at meter is . The assumption of equal transmission powers can be further relaxed and is adopted for mathematical convenience.

The distances between clusters are much larger than the size of clusters and bounded below by a distance .

The transmission model is path loss with exponent . No shadowing or fading is assumed in the current analysis. However, the analysis can be generalized to more realistic models of transmission.

The accessible spectrum is divided into different bands, denoted by .

At time , the th cluster is in state , corresponding to the index of the transmission band it is using.

The probability of two clusters updating their frequency bands at the same instance of time is negligible. This assumption can be relaxed [1].

The rate of change of the spatial distributions of the clusters in the network is much less than the processing/transmission rate. Therefore, the topology of the network is assumed to be fixed in the analysis of the frequency allocation algorithm.

We choose the aggregate interference of the network as our performance metric. This assumption makes the analysis mathematically tractable and is a reasonable metric for performance comparison. Simulation results in Section V show that the minimization of the aggregate interference of the network results in a near-optimal solution for the problem of maximizing the aggregate Shannon capacity of the network links.

## III The Algorithm

Using the above assumptions, we approximate the th cluster, , by a single node with transmission power within a distance from the other clusters. The interference experienced by caused by all the other clusters is therefore,

(1) |

where is the Kronecker delta function, defined as

(2) |

At time , one of the clusters, say , updates its transmission frequency band. The nature of the update procedure is asynchronous for all the clusters. This is intuitively appealing, because of the nature of ad-hoc networks, where there is usually no common clock among the nodes. We assume that the updates are taking place at times . We can therefore change the continuous-time interference model in Eq. (1) to a discrete-time version as

(3) |

where corresponds to the time , when an update is taking place. Let denote the set of clusters transmitting in band prior to time . Also, let denote the interference experienced by caused by all the clusters in if was transmitting in band , for . can be written as

(4) |

We denote the aggregate interference of the network at time by as

(5) |

It must be noted that for notational convenience, we drop the time dependence of the functions , and following the convergence of the algorithm or whenever the spatial configuration is fixed over time, and denote them by , and , respectively. We also denote the aggregate interference of the worst-case scenario, optimal scenario and that of the output of the algorithm by , and , respectively. We can now define the main algorithm:

Main Algorithm: *Clusters scan all the frequency bands in an asynchronous manner over time. Each cluster chooses the frequency band in which it experiences the least aggregate interference from other clusters. In other words, at time , a cluster, say , updates its state according to the following rule*

(6) |

*where is the new state of updated at time .*

For this purpose, the cluster head scans all the frequency bands and estimates/measures the interference it experiences in each frequency band. The cluster head chooses the new transmission frequency band according to Eq. (6), the decision criterion in the Main Algorithm.
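As an illustration, the greedy update rule of the Main Algorithm can be sketched for a one-dimensional toy network. This is a minimal sketch, not the paper's simulation code: the function names, the path-loss exponent, and the unit transmission powers are illustrative assumptions.

```python
import random

def aggregate_interference(pos, bands, alpha=3.0, P=1.0):
    # total interference over all ordered co-band pairs, path-loss P / d^alpha
    n = len(pos)
    return sum(P / abs(pos[i] - pos[j]) ** alpha
               for i in range(n) for j in range(n)
               if i != j and bands[i] == bands[j])

def local_interference(i, pos, bands, m, alpha=3.0, P=1.0):
    # interference cluster i would experience if it transmitted in band m
    return sum(P / abs(pos[i] - pos[j]) ** alpha
               for j in range(len(pos)) if j != i and bands[j] == m)

def run(pos, M, alpha=3.0, seed=0):
    # asynchronous greedy updates: each cluster moves to the band in which it
    # sees strictly less interference, until a full pass changes nothing
    rng = random.Random(seed)
    bands = [rng.randrange(M) for _ in pos]
    changed = True
    while changed:
        changed = False
        for i in rng.sample(range(len(pos)), len(pos)):
            cur = local_interference(i, pos, bands, bands[i], alpha)
            best = min(range(M),
                       key=lambda m: local_interference(i, pos, bands, m, alpha))
            if local_interference(i, pos, bands, best, alpha) < cur - 1e-12:
                bands[i] = best
                changed = True
    return bands
```

Because each switch strictly decreases the switching cluster's own interference, the aggregate interference acts as a potential function and the loop terminates at a local minimum; on a uniform linear array with two bands the converged interference stays well below the single-band worst case.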

## IV Main Results

### IV-A Convergence

###### Theorem IV.1

Given any reciprocal channel model, the Main Algorithm converges to a local minimum in polynomial time in .

The proof is given in Appendix A.

### IV-B Performance Bounds

###### Theorem IV.2 (Upper Bound)

Let denote the aggregate interference of all the clusters corresponding to the state of the algorithm following convergence (see Theorem IV.1), and let denote the aggregate interference for the worst-case interference scenario (all clusters transmitting in one frequency band); then

The proof is given in Appendix B-A.

In order to obtain further performance bounds for the Main Algorithm, one must analyze its behavior following convergence, over a configuration induced by the spatial configuration of the network nodes. However, such an analysis for the general case is non-trivial. We therefore focus our attention on a specific class of spatial configurations, where all the clusters are collinear (lie on a line). This is referred to as a *Linear Array*. For a given , we assume that the clusters are located in , where is a constant.

###### Definition IV.3

A *Uniform Linear Array* of clusters is a Linear Array in which , for .

###### Theorem IV.4 (Optimal Strategy)

Let be the aggregate interference of the optimal strategy for a given linear array of clusters located in . Then,

(7) |

where is the Riemann zeta function.

The proof is given in Appendix B-B.

###### Theorem IV.5

Uniform Linear Array achieves the bound in the statement of Theorem IV.4, as .

The proof is given in Appendix B-C.

###### Theorem IV.6

If and in the statement of Theorem IV.5, then the optimal strategy is the alternating assignment for any .

The proof is given in Appendix B-D.
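As a numerical illustration of the alternating assignment discussed in Theorems IV.5 and IV.6, the following toy computation on a uniform linear array assumes unit inter-cluster spacing and unit power (illustrative values, not taken from the paper). For this toy model, the alternating pattern reduces the normalized co-band interference by a factor approaching 2^(-alpha) relative to the single-band worst case, since all co-band distances double.

```python
def agg_per_cluster(bands, alpha):
    # normalized aggregate interference of a uniform linear array (d = 1, P = 1)
    n = len(bands)
    total = sum(1.0 / abs(i - j) ** alpha
                for i in range(n) for j in range(n)
                if i != j and bands[i] == bands[j])
    return total / n

n, alpha = 400, 4.0
worst = agg_per_cluster([0] * n, alpha)              # all clusters in one band
alternating = agg_per_cluster([i % 2 for i in range(n)], alpha)
ratio = alternating / worst                          # approaches 2 ** (-alpha)
```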

We can use Theorems IV.4 and IV.5 to upper bound the performance of the algorithm as given by the following Corollaries:

###### Corollary IV.7

Let denote the aggregate interference corresponding to the output of the Main Algorithm and be the aggregate interference of the optimal strategy. Then, for any spatial configuration of clusters in we have

(8) |

as .

The proof is given in Appendix B-E.

###### Corollary IV.8

If the array in the statement of Corollary IV.7 is a Uniform Linear Array, then

where and for a Uniform Linear Array in .

The statement of the Corollary follows by letting in Corollary IV.7.

### IV-C Time-varying Setup

For cluster , , we consider an activity indicator state , such that and correspond to being active and inactive at time , respectively. Let and be the probability of being in activity indicator state and at time , respectively. The evolution of the probabilities is given by:

(9) |

which corresponds to a symmetric two-state Markov model. We assume for all , .
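The symmetric two-state activity model can be sketched as a simple discrete-time Markov chain. The per-slot flip probability `q` below is an assumed illustrative value, not a parameter from the paper; by symmetry, the stationary distribution is uniform over the two states.

```python
import random

def active_fraction(steps, q, seed=0):
    # symmetric two-state Markov chain over {inactive (0), active (1)};
    # at each time slot the cluster flips its state with probability q
    rng = random.Random(seed)
    state, active = 1, 0
    for _ in range(steps):
        if rng.random() < q:
            state ^= 1
        active += state
    return active / steps

frac = active_fraction(steps=200_000, q=0.05)
# by symmetry the stationary distribution is (1/2, 1/2), so frac is near 0.5
```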

In this case, the convergence of the algorithm is not guaranteed. Furthermore, the upper bound on the performance of the algorithm does not hold in the presence of time-varying statistics. In order to further analyze the performance of the algorithm, we use the following additional assumptions:

The clusters update their frequency band asynchronously according to the same temporal statistics.

The update process is modeled by a Poisson process of rate , *i.e.*, each cluster updates its frequency band with a rate .

The number of users switching on/off in each time slot is much less than the total number of users (near equilibrium scenario).

### IV-D Dynamics of the Algorithm

Let denote a continuous-time approximation to the aggregate interference of the network at time . It can be shown that

(10) |

where denotes the ensemble average (over different update patterns), and is a geometrical constant representing the effective number of interacting neighbors of a cluster, including itself (the details are given in Appendix C-A).

Eq. (10) is an approximate differential equation describing the behavior of the algorithm near equilibrium. In other words, the aggregate interference decreases exponentially with rate near the equilibrium point.

We can model the change in the number of active clusters by two Poisson counters of rate [5]. We associate two independent Poisson counters and to the users going on and off, respectively. We have

(11) |

Each cluster, when activated, approximately experiences the instantaneous normalized aggregate interference of the network, due to spatial ergodicity. If we define and , under the assumption of being small compared to , we have the new dynamics in the Itô form [7] as

(12) |

Under this model, in the steady state the variance settles down to

(13) |

given

(14) |

(please see Appendix C-B for details).

According to our model for cluster activities given by Eq. (9), . Thus, we get the following trade-off inequality

(15) |

in order to have a finite variance in the steady state.
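The flavor of this steady-state variance analysis can be illustrated with a toy linear jump model: a state that decays at an assumed rate `a` and is kicked by two independent Poisson counters of an assumed rate `mu` with jump size `b`. All parameter values here are illustrative and are not the paper's model parameters; for this toy model the stationary variance works out to `mu * b**2 / a`, which is finite whenever the decay rate is positive.

```python
import random

def steady_state_variance(a, b, mu, T=200.0, dt=1e-3, seed=0):
    # Euler simulation of the toy jump model
    #   dX = -a * X * dt + b * dN_on - b * dN_off,
    # with N_on, N_off independent Poisson counters of rate mu
    rng = random.Random(seed)
    x, samples, steps = 0.0, [], int(T / dt)
    for k in range(steps):
        jump = 0.0
        if rng.random() < mu * dt:   # activation event
            jump += b
        if rng.random() < mu * dt:   # deactivation event
            jump -= b
        x += -a * x * dt + jump
        if k > steps // 4:           # discard the transient
            samples.append(x)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

# for this linear toy model the stationary variance is mu * b**2 / a
var = steady_state_variance(a=2.0, b=0.1, mu=5.0)
```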

### IV-E Discussion of the Results

Theorem IV.1 guarantees the convergence of the Main Algorithm in polynomial time in , regardless of the configuration of the nodes in the network. The first performance result is stated in Theorem IV.2, which gives an upper bound on the performance of the algorithm. This result is independent of the topology of the network and is a direct consequence of the structure of the Main Algorithm.

In order to obtain further performance bounds for the Main Algorithm, as stated earlier, we have chosen the class of Linear Arrays for our analysis. Theorem IV.4 is the main result on the asymptotic performance of the Main Algorithm for Linear Arrays. The Riemann zeta function in the statement of the Theorem is merely a consequence of the path loss model for the transmission and the fact that at each time slot, only one transmitter and one receiver are active in each cluster. However, the achievability of the bound in Theorem IV.4 is not trivial, since the optimal strategy of frequency allocation is not known for a general spatial configuration of the clusters. Theorem IV.5 proves the asymptotic achievability of the bound in the statement of Theorem IV.4 for Uniform Linear Arrays: the alternating frequency band allocation achieves the bound in Theorem IV.4 as , and is therefore the optimal strategy of frequency allocation. In order to analyze the optimal strategy for finite , we focus our attention on the case of , *i.e.*, two frequency bands. Theorem IV.6 states that the optimal strategy is the alternating frequency allocation for finite , when .

Corollaries IV.7 and IV.8 compare the performance of the Main Algorithm with that of the optimal strategy for Linear Arrays and Uniform Linear Arrays, respectively. This is done by combining the upper bound of Theorem IV.2 with the lower bound of Theorem IV.4. The bound in Corollary IV.7 is not as tight as the original bounds, since it is a worst-case bound applicable to any spatial configuration of clusters. As the simulation results in the next section show, the algorithm performs significantly better than these bounds.

The stochastic analysis for the time-varying case results in a simple trade-off inequality for design purposes. In other words, for given time-varying statistics, we can design the update rate to guarantee a finite steady state variance.

The geometrical parameter can be empirically estimated for different network topologies. Theoretical estimates are also possible. For instance, for a uniform linear array of clusters, each cluster has two effective neighbors. Therefore, an estimate of seems reasonable and is verified by simulation results (Please see Section V). If for some , as , the inequality (14) always holds. Therefore, the algorithm converges in both mean and variance in the sub-linear regime.

### IV-F Extension of the Results

Generalization of the result of Theorem IV.6 to is not straightforward, since the combinatorial possibilities of the assignments grow exponentially with . Furthermore, generalization of all the above results to higher dimensions is a non-trivial problem. The reason is that in dimensions greater than 1, the degrees of freedom for the cluster interactions increase dramatically. We are currently studying this scenario extensively.

As noted earlier, any model with channel reciprocity suffices for the convergence of the Main Algorithm (See the proof in Appendix A). Moreover, the upper bound on the performance of the algorithm given in Theorem IV.2 holds for any model with channel reciprocity. The other performance bounds, in their current format, rely on the specific path loss model. However, it may be possible to generalize the same method of analysis to other reciprocal channel models.

The stochastic analysis can be generalized to other statistical models for the activity of the clusters over time. The generalization of the results to is straightforward.

## V Simulation Results

Fig. 2 shows the performance of the algorithm on different arrays of 100 clusters in one and two dimensions. Fig. 2 (a), (b) and (c) show the normalized aggregate Shannon capacity (both the optimum/near-optimum value and that of the output of the algorithm) and the normalized aggregate interference of the network as a function of time for a uniform linear array (with ), an integer (rectangular) lattice (with ) and a hexagonal lattice (with ), respectively (Please see [3]). The normalized aggregate Shannon capacity is defined as the aggregate Shannon capacity divided by the number of clusters. Similarly, the normalized aggregate interference is defined as the aggregate interference divided by the number of clusters. Here , and . For the rectangular and hexagonal lattices, the computation of the optimum frequency band assignment is very complicated, and finding it by exhaustive search was beyond the capabilities of our simulation platforms. Instead, we have compared the performance of the algorithm to that of the frequency reuse pattern as a near-optimal candidate. As can be observed from the figure, the minimization of the aggregate interference results in an overall increasing behavior of the capacity. In all cases more than 90% of the capacity of the optimal (near-optimal) centralized frequency assignment is achieved.
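A minimal sketch of how a normalized aggregate Shannon capacity could be computed for a one-dimensional toy network follows. The in-cluster link distance, the noise floor, and the path-loss exponent below are illustrative assumptions, not the simulation parameters used in the figures; the sketch only illustrates the qualitative point that lower co-band interference yields a higher normalized capacity.

```python
import math

def normalized_capacity(pos, bands, alpha=3.0, P=1.0, noise=1e-3, d_link=0.1):
    # per-cluster Shannon capacity log2(1 + SINR), averaged over clusters;
    # d_link is an assumed in-cluster transmitter-receiver distance
    caps = []
    for i in range(len(pos)):
        interf = sum(P / abs(pos[i] - pos[j]) ** alpha
                     for j in range(len(pos))
                     if j != i and bands[i] == bands[j])
        signal = P / d_link ** alpha
        caps.append(math.log2(1.0 + signal / (noise + interf)))
    return sum(caps) / len(caps)

pos = list(range(100))
c_one_band = normalized_capacity(pos, [0] * 100)
c_alternating = normalized_capacity(pos, [i % 2 for i in range(100)])
# lower co-band interference translates into a higher normalized capacity
```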

Fig. 3 shows the performance of the algorithm on a uniform linear array along with the lower and upper bounds we have obtained in Section IV-B. Here , , and . For the initial condition of the algorithm, we let all the clusters be in frequency band . The updates are repeated until convergence is achieved. As we observe from the figure, the algorithm performs significantly better than the upper bound we have obtained and is less than dB away from the alternating assignment and the lower bound.

Fig. 4 (a) and (b) show the performance of the Main Algorithm, the worst case and the frequency reuse pattern for rectangular and hexagonal arrays of clusters, respectively. Here , , and . Although we have not established any lower performance bounds for two-dimensional arrays, the algorithm performs very close to the centralized 1:4 frequency reuse solution.

In Fig. 5, the dynamics of the algorithm for a uniform linear array of 100 clusters is shown. Here , which corresponds to the case of no time-varying statistics. The empirical curve is averaged over 500 different update patterns. The theoretical curve corresponds to an exponential with rate for and . As can be observed from the figure, the theoretical estimate matches the empirical curve very well.

Fig. 6 shows the theoretical and empirical steady state variance of the aggregate interference vs. switching rate () for a uniform linear array of 100 clusters. The empirical curve is obtained by averaging over 500 different realizations of the update process. As can be observed from the figure, the theoretical variance (with ) matches the empirical variance.

## VI Conclusion

We have proposed a distributed algorithm for finding a sub-optimal frequency band allocation to the clusters in a network. We have also derived some performance bounds for the special case of linear arrays of clusters. Simulations show that the algorithm performs significantly better than the performance bounds we have established. We have also derived a stochastic differential equation describing the behavior of the algorithm in the presence of time variation in the activity of clusters near the equilibrium point. A trade-off inequality to guarantee stability in the performance of the algorithm is established. The stochastic modeling framework opens the possibility of both open-loop and closed-loop stochastic control. These problems are currently being studied.

## Appendix A Proof of the Convergence Result

First we prove the following Lemma:

###### Lemma A.1

is a non-increasing function of that is bounded from below.

First of all, we need to show that has a lower bound. We know that the aggregate interference of all the clusters is a non-negative quantity. Therefore, for all .

Secondly, we need to show that is a non-increasing function of . Without loss of generality, we assume that has been transmitting in band . can be written as

(16) |

where we have used the channel reciprocity. After the update, the algorithm implies that chooses the new band according to the decision criterion (6) in the statement of the Main Algorithm. Therefore,

where satisfies the decision criterion (6). According to Eq. (6), we have , for all . Therefore,

(18) |

which gives the statement of the Lemma. It must be noted that channel reciprocity is sufficient for this proof to hold.
*Proof of Theorem IV.1:*

According to Lemma A.1, is a lower bounded non-increasing function of . It can take at most distinct values, corresponding to the different frequency band assignments to the clusters. Therefore, such that we have
and no cluster updates its state up to an isomorphism from to itself.

After each update, the change in is at least of order , where . The aggregate interference of all the clusters is of order . Therefore, we need at most switches to reach the final configuration of the algorithm. After any round of updates, during which all the clusters have updated their state, at least one cluster changes its frequency band. Therefore, the total number of updates is , which is bounded by a polynomial in . Therefore, the Main Algorithm converges to a local minimum in polynomial time in .

## Appendix B Performance Bounds Proofs

### B-A Proof of Theorem IV.2

###### Lemma B.1

Let be the interference on when all the other clusters are co-band with it. Then,

At time , chooses a frequency band, say such that , for all . Therefore,

(19) |

which gives the statement of the Lemma.
*Proof of Theorem IV.2*:

Let denote the value of
following convergence. Using the result of Lemma B.1, the aggregate interference can
be written as

(20) |

which gives the statement of the Theorem.

### B-B Proof of Theorem IV.4

###### Lemma B.2

Let denote the aggregate interference of the worst case scenario for a uniform linear array of clusters in . As , we have

(21) |

where is the Riemann zeta function.

We have

Let denote the interference experienced by the cluster located at , when all the other clusters are co-band with it, for a uniform linear array of clusters in . We can write

(23) |

We need to show that for any small enough , such that for ,

(24) |

Clearly for all ,

(25) |

Therefore,

(26) |

If we choose large enough so that , for all we will have

(27) |

which proves the statement of the Lemma.
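As a quick numerical check of the flavor of Lemma B.2: for a toy uniform linear array with unit spacing and unit power (illustrative assumptions), the normalized co-band interference when all clusters share one band converges to 2ζ(α) as the number of clusters grows. The sketch below verifies this for α = 2, where ζ(2) = π²/6.

```python
import math

def worst_case_normalized(n, alpha):
    # normalized interference when all n clusters of a uniform linear
    # array (d = 1, P = 1) transmit in the same band
    total = sum(1.0 / abs(i - j) ** alpha
                for i in range(n) for j in range(n) if i != j)
    return total / n

value = worst_case_normalized(1000, alpha=2.0)
limit = 2 * (math.pi ** 2 / 6)   # 2 * zeta(2)
```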

###### Lemma B.3

For a given distribution of clusters in , we have

as .

Let’s assume that is even. Suppose that we fix the first cluster at 0 and the th cluster at . We define a vector interaction field between any two clusters as with , and suppose that each cluster is denoted by a point on the interval interacting with the others according to our vector field. Therefore, if we let the points be initially distributed on the line with distances , the final equilibrium configuration will be when is minimized. This is because acts as a potential function for the system due to the definition of the interaction vector field. Let denote the final distances of the clusters following equilibrium. If we consider as a function of adjacent distances for , it will have the following form:

(28) |

which is clearly a convex function of , . To find for in the equilibrium configuration, we need to minimize subject to the constraint . Since the constraint is linear in for and the second derivative of with respect to any is a sum of positive terms, there is no for which the second derivative of the objective function is zero. Therefore, the minimization has a unique solution, which is the equilibrium state (this is intuitively clear from the physical model of a number of points interacting on a one-dimensional lattice). Since the system is in equilibrium under the distribution given by , we have

(29) |

where denotes the exact differential and . We claim that if we choose for all , then for every small enough, we can find an so that for all we have:

(30) |

This means that we can get arbitrarily close to the equilibrium point of the system. Since the objective function is continuous and differentiable with respect to , for , we have chosen to approach the equilibrium point uniformly with respect to all , for , *i.e.*, choosing the same amount of variation for all the coordinates. From the properties of a potential function for a vector field, we know that

(31) |

where is the aggregate interaction field of all the other particles on the th particle. If we set for all the users, then

(32) |

for all , and for . Therefore, we can bound the expression on the right-hand side of Eq. (31) as

(33) |

where . Clearly, . Therefore, if we choose so that , similar to the proof of Lemma B.2, we can choose large enough to make sure that Eq. (30) holds. This means that the uniform spatial configuration asymptotically coincides with the global minimum of the system, which is the optimal spatial distribution of clusters with least aggregate interference. Since the result holds for large , fixing the first and last cluster at and does not affect the result. The case for odd can be treated similarly, since excluding one of the users does not change the asymptotic result. Lemma B.3 gives an important result: the worst-case aggregate interference of any spatial distribution of users in is greater than or equal to that of a uniform linear array of clusters in , for sufficiently large .

###### Lemma B.4

Let denote the set of clusters in frequency band corresponding to the optimal assignment strategy for a given spatial configuration of clusters in . Let denote . Then, is an unbounded sequence, for all and any spatial configuration of clusters in .

Let and . Let us assume that is a bounded sequence. That is, there exist numbers and and a specific spatial configuration of the clusters for any , such that . Let correspond to frequency band (where the dependence on is implicit). We have

(34) |

where we use the additional argument of to show the implicit dependence of the aggregate interference on the number of frequency bands. We take the same ensemble for and assign any configuration of frequency bands for to all and keep the frequency assignment of the rest of the clusters as in the optimal strategy. Let denote the index corresponding to the new frequency band assigned to . The normalized aggregate interference of this new ensemble will be

We also know that , where is the aggregate interference of the optimal frequency band assignment to the same ensemble, using frequency bands. We define

(36) |

(37) |

Then, we have

Both and are upper bounded by , where is the minimum distance between two clusters, based on our assumptions. We have

This is clearly not possible. Suppose that we have frequency bands . Consider the optimal frequency band assignment to the clusters for in . Since , there exists a such that . If we consider the clusters in and allow the assignment of an additional frequency band to them, then according to Theorem IV.2, there exists a frequency band assignment using the two bands and , to the clusters in , for which the normalized aggregate interference of all clusters in is at least half of that in the worst case scenario. Therefore, adding a frequency band , decreases the normalized aggregate interference by at least

(39) |

for sufficiently large , according to Lemma B.3. Therefore, cannot be arbitrarily close to for any spatial configuration of clusters, as . Thus, is an unbounded sequence. Since and is a finite set, we conclude that is an unbounded sequence for all .

###### Lemma B.5

Let denote corresponding to the optimal strategy for a given spatial configuration of clusters in . Then, we have , for all , for sufficiently large .

Using the result of Lemma B.3, we can write

(40) |

since the clusters are located in and is unbounded for all (Lemma B.4) as .

*Proof of Theorem IV.4*:
We have